One of the most famous expressions used in the average kindergarten classroom, made a bit more confusing by Intel. Seeing as computers never make mathematical errors, I guess 1+1=1.99999999486 after all.

In the early days of the 8086 architecture, in order to make certain routines as efficient as possible, Intel programmed a pretty good Floating Point Number (FPN) addition routine. Unfortunately, they didn't test the chip properly, and it turned out that the processor understood FPN 1 plus FPN 1 to equal FPN 1.9999999486. This obviously sucked, and since the 8086, Intel has corrected the error and made it more efficient.
Actually, the bug was in the 586 (Pentium)'s floating point division. Unless you're talking about something much older, in which case I've never heard of it. Either way, Intel CPUs didn't have built-in FPUs until the 486DX, which came long after the 8086.

Hardly anyone understands this bug anyway, they all just like to laugh like idiots at equations like "1+1=1.999999486". Most of them are Mac users. Go figure.
Doubtful this floating point arithmetic mistake has ever been made, even by Intel.

FPUs represent real numbers as a binary number (the mantissa) times a power of 2 (the exponent). 1 would be represented as a mantissa of 1 and an exponent of 0. (Internally, for IEEE 754 single precision, that's a stored fraction field of 0 — the leading 1 is implicit — and a biased exponent field of 127, but that's beside the point.)
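You can poke at those stored fields yourself. Here's a quick sketch (using Python's standard struct module) that packs 1.0 as an IEEE 754 single-precision float and pulls out the sign, biased exponent, and fraction bits:

```python
import struct

# Pack 1.0 into its 4-byte IEEE 754 single-precision representation,
# then reinterpret those bytes as an unsigned 32-bit integer.
bits = struct.unpack(">I", struct.pack(">f", 1.0))[0]

sign = bits >> 31             # 1 bit:  sign
exponent = (bits >> 23) & 0xFF  # 8 bits: biased exponent field
fraction = bits & 0x7FFFFF      # 23 bits: stored fraction (leading 1 implicit)

print(sign, exponent, fraction)  # 0 127 0
```

Sure enough: sign 0, exponent field 127 (bias 127, so a true exponent of 0), and a fraction field of all zeros.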

This means that no accuracy is lost in translating the two numbers to binary form. The process of adding is then easy. Since the exponents are the same, the exponent of the result is one more than that of the addends, and the mantissa of the result is just the sum of the addends' mantissas bit-shifted right by 1. Both the addends and the answer are exactly representable, so the result is exact.
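A trivial sketch of that claim: since 1.0 and 2.0 are both exactly representable, the sum involves no rounding at all, which we can confirm with exact rational arithmetic from Python's standard library.

```python
from fractions import Fraction

# 1.0 is exactly representable in binary floating point, and adding two
# equal-exponent values (sum the mantissas, bump the exponent) is exact here.
a, b = 1.0, 1.0
total = a + b

print(total == 2.0)                    # True: no rounding occurred
print(Fraction(total) == Fraction(2))  # True: exact even as a rational
```

No amount of floating point pedantry will make 1.0 + 1.0 come out to anything but 2.0.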

An FPU with zero bits of precision would still get 1+1 right.

Well, addition is easy. Division is hard.
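Which is exactly where the Pentium actually tripped. The widely circulated FDIV test used the operands below; on a flawed chip the expression reportedly came out to 256 instead of (essentially) zero. On any correct FPU:

```python
# The classic Pentium FDIV check. A flawed chip returned a wildly wrong
# quotient for this particular operand pair; a correct one does not.
x, y = 4195835.0, 3145727.0
err = x - (x / y) * y

print(err)  # essentially zero on a correct FPU
```

The quotient lands on one of the defective entries in the flawed chip's division lookup table, which is why this innocuous-looking pair became the canonical test.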