This is the canonical example of an ill-conditioned polynomial, devised by Professor James H. Wilkinson (1919-1986) in a 1959 paper to illustrate the difficulties involved in computing the roots of polynomials and, by extension, algebraic eigenvalues. We begin with the polynomial,

P(x) = (x + 1)(x + 2) ... (x + 20)

Expanding this polynomial shows that it has fairly large coefficients:

x^20 + 210x^19 + 20,615x^18 + 1,256,850x^17 + 53,327,946x^16 + 1,672,280,820x^15 + 40,171,771,630x^14 + 756,111,184,500x^13 + 11,310,276,995,381x^12 + 135,585,182,899,530x^11 + 1,307,535,010,540,395x^10 + 10,142,299,865,511,450x^9 + 63,030,812,099,294,896x^8 + 311,333,643,161,390,640x^7 + 1,206,647,803,780,373,360x^6 + 3,599,979,517,947,607,200x^5 + 8,037,811,822,645,051,776x^4 + 12,870,931,245,150,988,800x^3 + 13,803,759,753,640,704,000x^2 + 8,752,948,036,761,600,000x + 2,432,902,008,176,640,000
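These coefficients can be checked exactly with integer arithmetic. The following sketch (my own, not from the paper) expands the product one factor at a time using Python's arbitrary-precision integers, storing coefficients from the highest degree down:

```python
# Expand (x+1)(x+2)...(x+20) exactly using big-integer arithmetic.
# Coefficients are stored highest degree first: coeffs[0] is the x^20 term.
coeffs = [1]  # the constant polynomial 1
for k in range(1, 21):
    # Multiply the running polynomial p(x) by (x + k):
    # x*p(x) shifts every degree up; k*p(x) scales every coefficient.
    shifted = coeffs + [0]
    scaled = [0] + [k * c for c in coeffs]
    coeffs = [a + b for a, b in zip(shifted, scaled)]

for degree, c in zip(range(20, -1, -1), coeffs):
    print(f"x^{degree}: {c:,}")
```

The constant term comes out to 20! = 2,432,902,008,176,640,000, and the x^19 coefficient is 1 + 2 + ... + 20 = 210, matching the expansion above.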

The zeroes of this polynomial, -1, -2, -3, ..., -20, are, however, about as well separated as one could ask. Changing the coefficient of x^19 by adding 2^-23, which is machine epsilon for single-precision IEEE-754 floating-point numbers, we get Wilkinson's polynomial. Now, intuition might tell us that such a slight change to a single coefficient can't matter very much in the grand scheme of things, but we could not be more wrong. To nine decimal places, the roots of Wilkinson's polynomial are:

-1.000000000
-2.000000000
-3.000000000
-4.000000000
-4.999999928
-6.000006944
-6.999697234
-8.007267603
-8.917250249
-10.095266145 ± 0.643500904i
-11.793633881 ± 1.652329728i
-13.992358137 ± 2.518830071i
-16.730737466 ± 2.812624894i
-19.502439400 ± 1.940330347i
-20.846908101
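The experiment is easy to repeat. Here is a sketch using NumPy (the perturbation setup is mine; note that the largest coefficients exceed 2^53, so double precision cannot even represent them exactly, and the computed roots will only roughly match the values above):

```python
# Reproduce the perturbed roots of Wilkinson's polynomial with NumPy.
import numpy as np

# Coefficients of (x+1)(x+2)...(x+20), highest degree first.
# np.poly builds them in floating point, so they are already slightly rounded.
coeffs = np.poly(np.arange(-1, -21, -1))

# Perturb the x^19 coefficient (index 1) by 2^-23.
perturbed = coeffs.copy()
perturbed[1] += 2.0 ** -23

roots = np.sort_complex(np.roots(perturbed))
for r in roots:
    print(r)
```

The smaller roots come back essentially intact while the larger ones scatter into the complex plane, just as in the table above.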

Large changes have appeared in all the roots from -9 to -20, and ten of them have become complex, with imaginary parts enormous in proportion to the size of the coefficient change. These changes are real: they are not mistakes in the calculation of the roots, but the actual displacement of the roots caused by the minor change in the coefficients. This is a truly shocking result, and it does not bode well for those engaged in numerically calculating the roots of high-degree polynomials or the eigenvalues of large matrices. To reliably compute the roots of polynomials of sufficiently high degree, one needs at least double-precision arithmetic, and if one is using deflation with synthetic division to eliminate roots as they are found, one had better compute each root to as high an accuracy as possible to avoid this kind of fiasco.
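Standard first-order perturbation analysis (my addition, not from the original discussion) makes the disparity quantitative: if the x^19 coefficient changes by δ, the root r = -k moves by roughly δ · r^19 / P'(r), where P'(-k) is the product of (j - k) over all j ≠ k. A quick sketch:

```python
# First-order sensitivity of each root of P(x) = (x+1)...(x+20) to a
# perturbation delta in the x^19 coefficient: shift ~ delta * r^19 / P'(r).
from math import prod

delta = 2.0 ** -23
for k in (1, 5, 10, 15, 20):
    dPdx = prod(j - k for j in range(1, 21) if j != k)  # P'(-k)
    sensitivity = abs((-k) ** 19 / dPdx)
    print(f"root -{k}: estimated shift ~ {delta * sensitivity:.2e}")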

I suppose this must be an early example of what would later be called chaos theory. with the kind of sensitive dependence on initial conditions that are its hallmark. Numerical analysts beware, and if something like this bites you in practice remember Hamming's Motto: "The purpose of computing is insight, not numbers."