Binary Coded Decimal (BCD) is a method of storing numbers. It is good for making binary data easier for humans to read, but not much else in a theoretical sense.

Each digit in the decimal number is encoded in 4 bits of standard binary.

0  0000
1  0001
2  0010
3  0011
4  0100
5  0101
6  0110
7  0111
8  1000
9  1001

Hence, the number 69 would look like
0110 1001
in BCD.
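That digit-by-digit encoding can be sketched in a couple of lines of Python (the helper name `to_bcd_bits` is mine, not anything standard):

```python
def to_bcd_bits(n):
    """Encode a non-negative integer as BCD: 4 bits of plain binary per decimal digit."""
    return " ".join(format(int(digit), "04b") for digit in str(n))

print(to_bcd_bits(69))  # 0110 1001
```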

If you look at BCD data in hexadecimal notation, it looks just like regular decimal notation. Duh. BCD is just hex with the digits A-F unused. Just remember that BCD values and plain binary values don't add or multiply the same...
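You can see the hex/decimal coincidence by packing 69 into a single byte (a quick sketch using Python's built-in hex()):

```python
# Pack decimal 69 as packed BCD: high nibble 6, low nibble 9.
packed = (6 << 4) | 9
print(hex(packed))  # 0x69 - the hex digits read off as the decimal digits
print(packed)       # 105  - but the byte's plain binary value is not 69
```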

If you run Gnome, you can see BCD in action, with Gnome's lovely useless Binary Clock applet.

Also known as BCD. This is a way of using four bits to represent each of the ten decimal digits.
Zero is 0000.

  1. 0001
  2. 0010
  3. 0011
  4. 0100
  5. 0101
  6. 0110
  7. 0111
  8. 1000
  9. 1001
This is inefficient, of course, but it is useful in a couple of areas:
  1. Making drivers for 7 segment LED displays. Instead of using one byte for each digit on the display, you can stick two BCD numbers in each byte, and halve your required space.
  2. Accounting software. If you want to be accurate to the penny, then you need to work in the same base system as the currency. Calculating 1/5 in binary will leave you with a repeating fraction that has to be rounded, which does not happen in the decimal system, where it is exactly 0.2. (Conversely, 1/16 is exact in binary, but fractions like that rarely come up when counting pennies.)
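Python's decimal module works digit-by-digit in base 10, much as BCD arithmetic does, and shows the rounding difference directly (a sketch of the point, not accounting advice):

```python
from decimal import Decimal

# 1/10 is a repeating fraction in binary, so binary floats accumulate error:
print(0.1 + 0.2)                        # 0.30000000000000004
# In base 10 it is exact, so decimal (BCD-style) arithmetic stays penny-accurate:
print(Decimal("0.1") + Decimal("0.2"))  # 0.3
```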

In assembly language parlance, BCD refers to a way of intermediately storing numbers - between input (where the numbers are in ASCII form) and computation (where the numbers are in their native binary form, being either big-endian or little-endian, depending on CPU architecture).

ASCII characters representing numbers are numbered from 48 (0) to 57 (9), so the easiest way to get an ASCII string into BCD format is to simply subtract 48 from the ASCII value of each character and then just deal with it in binary form. (Some architectures, such as x86, have instructions that will allow you to carry out arithmetical operations (+-*/) directly on raw ASCII data without converting it to BCD, but if the program needs to do anything deeper, it is usually better to convert from ASCII to BCD or packed BCD.) BCD is often used in scenarios where rounding would result in an undesirable loss of precision, since a BCD number can be any length, even greater than that of the host CPU's registers.
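The subtract-48 conversion is a one-liner; here is a sketch (ascii_to_bcd is my name for it, not a standard routine):

```python
def ascii_to_bcd(s):
    """Convert an ASCII digit string to unpacked BCD: one digit value per byte."""
    return bytes(ord(c) - 48 for c in s)  # 48 is the ASCII code for '0'

print(list(ascii_to_bcd("1995")))  # [1, 9, 9, 5]
```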

BCD has two formats - unpacked, in which one decimal number is stored in each byte, and packed, in which two decimal numbers are stored in each byte. Since it only takes four bits (also called a nybble) to store a number between 0 and 9, and a byte contains twice as many bits, space can be saved by storing two decimal numbers in one byte. Many coprocessors prefer their numbers in packed BCD format.
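Packing and unpacking can be sketched with shifts and masks (the helper names are mine):

```python
def pack_bcd(digits):
    """Pack decimal digits two to a byte (high nibble first)."""
    if len(digits) % 2:
        digits = [0] + digits           # pad odd-length input with a leading zero
    return bytes((hi << 4) | lo for hi, lo in zip(digits[::2], digits[1::2]))

def unpack_bcd(data):
    """Split each byte back into its two decimal digits."""
    return [d for b in data for d in (b >> 4, b & 0x0F)]

print(pack_bcd([1, 9, 9, 5]).hex())     # 1995
print(unpack_bcd(bytes([0x19, 0x95])))  # [1, 9, 9, 5]
```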

x86 has the following instructions that facilitate ASCII, BCD, and packed BCD arithmetic:

  • AAA - ASCII Adjust After Addition (Raw ASCII)
  • AAS - ASCII Adjust After Subtraction (Raw ASCII)
  • AAM - ASCII Adjust After Multiplication (Unpacked BCD)
  • AAD - ASCII Adjust After Division (Unpacked BCD)
  • DAA - Decimal Adjust After Addition (Packed BCD)
  • DAS - Decimal Adjust After Subtraction (Packed BCD)
The first four instructions allow the programmer to add, subtract, multiply, and divide non-binary-format numbers. Since the results of adding/subtracting two ASCII numbers are inaccurate both in binary AND in ASCII, it is necessary to adjust them, and this is what the first four instructions do. (As noted, AAM and AAD do require that the data first be converted to unpacked BCD.)

DAA and DAS are similar to AAA and AAS, except that they operate on packed-BCD-format data.
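To make the adjustment concrete, here is a rough Python model of what ADD followed by DAA computes for two packed-BCD bytes (a sketch only; real DAA operates on AL using the AF and CF flags, and the carry out of the byte is dropped here):

```python
def packed_bcd_add(a, b):
    """Add two packed-BCD bytes the way x86 ADD + DAA would (carry out dropped)."""
    s = a + b
    # First adjustment: low digit overflowed past 9, or the add carried
    # out of bit 3 (what the AF flag records on a real CPU).
    if (s & 0x0F) > 9 or ((a & 0x0F) + (b & 0x0F)) > 0x0F:
        s += 0x06
    # Second adjustment: high digit overflowed past 9 (the CF flag's job).
    if s > 0x99:
        s += 0x60
    return s & 0xFF

print(hex(packed_bcd_add(0x38, 0x45)))  # 0x83, i.e. 38 + 45 = 83
```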

(Information partially paraphrased from IBM PC Assembly Language and Programming by Peter Abel, Prentice-Hall 1995.)
