In numerical analysis, cancellation is a bad thing. When a small floating-point quantity is added to a large one, the small quantity's contribution can be partly or wholly rounded away, and a later subtraction can then produce a result with a huge relative error. This fact is fundamental to numerical analysis.

An example, using the MATLAB notation:
>> x = pi;
>> y = 6.022e23;
>> x = x + y - y

x =

     0
It's things like this that make it very difficult to write high-quality general numerical libraries. For this reason the well-tested Netlib libraries are still widely used, despite being written in Fortran.

Catastrophic cancellation occurs in floating-point arithmetic (hence the connection to numerical analysis) when two nearly equal quantities are subtracted.

Suppose x>0 is large enough and ε>0 small enough that x and x+ε share the same floating-point representation. Say x is exactly representable (its floating-point representation is exactly x; the same is not true of x+ε, which is also represented by x, a different number). Then the relative error of representing x+ε as x is (x+ε)/x - 1 = ε/x. Since ε is small and x is large, the relative error for x+ε is tiny.

But what happens if we subtract the value x (which carries relative error 0)? A catastrophe! The exact answer is (x+ε)-x=ε. Instead, since x+ε has the same floating-point representation as x, we compute 0. The relative error of the result is 100%: every significant digit has been lost.

More commonly, x and x+ε will not share the same representation. But leading significant digits of ε will still be lost to rounding, and the relative error of the computed difference will again be large.
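A short sketch of this milder, more common case, with hypothetical values x = 1 and ε = 10⁻¹² chosen so that x and x+ε do have distinct representations:

```python
# x and x + eps are distinct doubles here, but the subtraction still
# loses most of eps's significant digits to rounding.
x = 1.0
eps = 1e-12

computed = (x + eps) - x            # exact answer would be 1e-12
rel_err = abs(computed - eps) / eps

print(computed)   # close to, but not equal to, 1e-12
print(rel_err)    # roughly 1e-4: only a few leading digits of eps survive
```

The subtraction itself is exact; the damage was done when `x + eps` was rounded to the nearest double, whose ulp near 1.0 is about 2.2e-16, wiping out everything below that level in ε.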

The solution? Avoid algorithms that subtract two nearly equal numbers. This is sometimes easier said than done; it's often better to reach for a tried-and-tested math library. But you still need to be aware of the undesired effects of cancellation.
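As one concrete illustration of "use the library", consider computing eˣ − 1 for tiny x. The naive form subtracts two nearly equal numbers; the standard-library routine `math.expm1` computes the same quantity without that subtraction (the choice of eˣ − 1 as the example is mine, not from the text above):

```python
import math

x = 1e-12

naive = math.exp(x) - 1.0   # exp(x) is almost exactly 1: cancellation
better = math.expm1(x)      # library routine sidesteps the subtraction

# The naive version keeps only a handful of correct digits;
# expm1 is accurate to full double precision.
print(naive)
print(better)
```

Library functions like `expm1` and its companion `log1p` exist precisely because this cancellation pattern is so common; using them is usually easier and safer than rearranging the formula by hand.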
