The number of digits that "matter" subjectively when recording or transmitting a number.

"7" has one significant digit.

"7 000 000 000" also has one significant digit.

"4.56792" has 6 significant digits.

So do "7 309 140 000" and "3.14159".

Or more objectively, the number of digits which can be relied upon to be accurate.

Each successive computation performed with approximate numbers (such as those stored in a computer's floating-point registers) will introduce some additional error.

If you perform many calculations using numbers with relatively few significant digits, each successive step compounds that error, and the result can be trusted to fewer and fewer significant digits.
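
To make this concrete, here is a minimal Python sketch of accumulated floating-point error (the exact drift printed will vary with the platform's double-precision arithmetic):

  from decimal import Decimal

  # Summing 0.1, which has no exact binary representation, many times drifts
  # away from the exact decimal result, so the trailing digits stop being reliable.
  n = 100_000
  float_sum = 0.0
  for _ in range(n):
      float_sum += 0.1               # each addition introduces a tiny rounding error

  exact_sum = Decimal("0.1") * n     # exact decimal arithmetic for comparison
  print(float_sum)                   # slightly off from 10000.0 due to accumulated error
  print(exact_sum)                   # 10000.0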

Compare and contrast accuracy and precision. Each additional significant digit increases precision. Each approximated digit that actually matches the correct value is an increase in accuracy.

The number of significant digits is the count of digits, starting from the leftmost nonzero digit, that can be expected to be accurate.

In theoretical calculations, it is often a good idea to express data and results in significant digits. The result of a calculation has the same number of significant digits as the number in the original problem with the fewest significant digits. Here are a few rules for counting significant digits.

  1. All nonzero digits are significant.
  2. All trailing zeros after the decimal point are significant.
  3. All zeros between significant digits are significant.
  4. All other digits are not significant.

You may have to use scientific notation to express the result of some calculations. For example, 49 + 51 equals 100, but since "100" has only one significant digit, the result must be written as 1.0 * 10^2 (or 1.0E2) to show two. Some practice:

  • 321: 3
  • 300: 1
  • 0.000003: 1
  • 0.300000: 6
  • 3.205: 4
  • 7.0000000000000000000: 20
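
These rules are mechanical enough to check in code. Here is a rough Python sketch that counts significant digits in a number written as a string (the name significant_digits is illustrative, not a standard library function); it agrees with the practice answers above:

  def significant_digits(s: str) -> int:
      s = s.lstrip("+-")
      digits = s.replace(".", "").lstrip("0")   # leading zeros are never significant
      if not digits:
          return 0                              # the string was all zeros
      if "." in s:
          return len(digits)                    # rule 2: trailing zeros after the point count
      return len(digits.rstrip("0"))            # no decimal point: trailing zeros do not count

  examples = [("321", 3), ("300", 1), ("0.000003", 1),
              ("0.300000", 6), ("3.205", 4), ("7." + "0" * 19, 20)]
  for text, expected in examples:
      assert significant_digits(text) == expected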

Significant Digit Patterns in Arithmetic:

In multiplication and division, the answer is rounded to the same number of significant figures as the input with the fewest significant figures. For example:

4.56 * 1.4 = 6.384, which rounds to 6.4

4.56 / 1.4 = 3.2571..., which rounds to 3.3

As the above examples demonstrate, both answers have two significant digits, the fewest of any input in the calculation (1.4 has only two significant digits).
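
In Python, this rounding can be expressed with a small helper (round_to_sig_figs is an illustrative name, not a built-in):

  import math

  def round_to_sig_figs(x: float, figs: int) -> float:
      if x == 0:
          return 0.0
      exponent = math.floor(math.log10(abs(x)))   # position of the leading digit
      return round(x, figs - 1 - exponent)

  print(round_to_sig_figs(4.56 * 1.4, 2))   # 6.4
  print(round_to_sig_figs(4.56 / 1.4, 2))   # 3.3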


In addition and subtraction, the answer is rounded to the same number of decimal places as the least precise measurement used in the calculation. For example:

12.11 + 18.0 + 1.013 = 31.123, which rounds to 31.1

18.0 - 12.11 - 1.013 = 4.877, which rounds to 4.9

In both cases, the answer has one decimal place, because the least precise measurement, 18.0, has only one decimal place.
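
The same rule in Python, rounding the sum to the decimal places of the least precise term (decimal_places is an illustrative helper):

  def decimal_places(s: str) -> int:
      return len(s.split(".")[1]) if "." in s else 0

  terms = ["12.11", "18.0", "1.013"]
  total = sum(float(t) for t in terms)            # 31.122999... in floating point
  places = min(decimal_places(t) for t in terms)  # 18.0 has only 1 decimal place
  print(round(total, places))                     # 31.1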

Significant digits could not be more eloquently described than in the words of Herman Melville:
The skeleton dimensions I shall now proceed to set down are copied verbatim from my right arm, where I had them tattooed; ... I did not trouble myself with the odd inches; nor, indeed, should inches at all enter into a congenial admeasurement of the whale.

Herman Melville, Moby Dick

Unit definitions have infinitely many significant digits, because every digit is exact. For instance, 1 kilometer = 1000 meters; this will always be true, and an infinite number of zeros is implied after the decimal point. So when converting units, the number of significant digits in the conversion factor never limits the result.
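
A small illustration, assuming a measured distance of 2.5 km (two significant digits):

  distance_km = 2.5                # a measurement with two significant digits
  distance_m = distance_km * 1000  # 1 km = 1000 m is exact, so it adds no uncertainty
  print(distance_m)                # 2500.0, still only two significant digits: 2.5E3 m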
