Laguerre's method is a simple method for numerically computing a root of an arbitrary polynomial. It was developed by the French mathematician Edmond Nicolas Laguerre (1834-1886). The method can be shown to converge at a cubic rate, except in the vicinity of a multiple root, where convergence is only linear. It is a "sure fire" method: it is guaranteed (well, almost) to converge to a root of the polynomial no matter from what point the iteration is started.

Every nth-degree polynomial (taken monic for simplicity) may be written as a product over its n roots as follows:

P(x) = (x - x_{1})(x - x_{2}) ... (x - x_{n})

so, it follows that

ln |P(x)| = ln |x - x_{1}| + ln |x - x_{2}| + ... + ln |x - x_{n}|

Taking the derivative of both sides:

d ln|P(x)|/dx = 1/(x - x_{1}) + 1/(x - x_{2}) + ... + 1/(x - x_{n}) = G = P'(x)/P(x)

The absolute value signs have all disappeared, and G may be positive or negative. Taking the second derivative:

-d^{2} ln|P(x)|/dx^{2} = 1/(x - x_{1})^{2} + 1/(x - x_{2})^{2} + ... + 1/(x - x_{n})^{2} = H = (P'(x)/P(x))^{2} - P''(x)/P(x)
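These identities for G and H can be checked numerically on a small example; the polynomial and evaluation point below are arbitrary illustrative choices:

```python
# Check G = P'(x)/P(x) and H = (P'(x)/P(x))^2 - P''(x)/P(x) against
# the root-sum forms, for P(x) = (x - 1)(x - 2)(x - 3) at x = 5.
roots = [1.0, 2.0, 3.0]
x = 5.0

G_sum = sum(1.0 / (x - r) for r in roots)        # sum of 1/(x - x_i)
H_sum = sum(1.0 / (x - r) ** 2 for r in roots)   # sum of 1/(x - x_i)^2

# Expanded form: P(x) = x^3 - 6x^2 + 11x - 6, and its derivatives.
P   = x**3 - 6*x**2 + 11*x - 6
dP  = 3*x**2 - 12*x + 11
d2P = 6*x - 12

assert abs(G_sum - dP / P) < 1e-12
assert abs(H_sum - ((dP / P) ** 2 - d2P / P)) < 1e-12
```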

If we assume that the root we are looking for, say x_{1}, is isolated, while the others are clustered a certain distance away (a fairly drastic assumption, but not altogether unreasonable), we can come up with a simple iteration method that should converge to the root no matter where it is in the complex plane. If we denote the distances by:

a = x - x_{1}

b = x - x_{i}, for i = 2,3, ..., n

Then the two equations for G and H above may be rewritten as:

G = 1/a + (n - 1)/b

H = 1/a^{2} + (n - 1)/b^{2}

We may then solve for a as:

a = n / (G ± sqrt((n - 1)(nH - G^{2})))

where the sign is chosen to give the denominator with the larger absolute value. One can then iterate as follows:

- Choose a trial root x_{i}, and evaluate G, P, H, and a.
- Calculate a new trial root from x_{i+1} = x_{i} - a.
- Repeat steps 1 and 2 until a becomes sufficiently small.
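The steps above can be sketched in Python. The function name and parameters are illustrative, and the sketch omits the safeguards (overflow checks, restarts on a zero denominator) a production implementation would add; P, P', and P'' are evaluated together in a single Horner-style pass:

```python
import cmath

def laguerre(coeffs, x, tol=1e-12, max_iter=100):
    """Find one root of the polynomial with the given coefficients
    (highest degree first), starting the iteration at x."""
    n = len(coeffs) - 1  # degree of the polynomial
    for _ in range(max_iter):
        # Evaluate P, P', and P'' at x in one pass.
        p, dp, d2p = coeffs[0], 0.0, 0.0
        for c in coeffs[1:]:
            d2p = d2p * x + 2.0 * dp
            dp = dp * x + p
            p = p * x + c
        if abs(p) < tol:
            return x                       # x is already (nearly) a root
        G = dp / p
        H = G * G - d2p / p
        # cmath.sqrt lets the iterate move into the complex plane.
        root = cmath.sqrt((n - 1) * (n * H - G * G))
        # Choose the sign giving the denominator of larger magnitude.
        denom = G + root if abs(G + root) >= abs(G - root) else G - root
        a = n / denom
        x = x - a
        if abs(a) < tol:
            return x
    return x

# x^3 - 1: a real starting point can still reach a complex root.
print(laguerre([1.0, 0.0, 0.0, -1.0], 2.0))    # near 1.0
print(laguerre([1.0, 0.0, 0.0, -1.0], -1.0))   # near a complex cube root of 1
```

Note that the iterate is complex throughout, which is what allows a real starting point to converge to a complex root.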

One problem (or advantage, depending on your point of view) with this method is that it may converge to a complex root, because the calculation of a may involve taking the square root of a negative number. The chief advantage of the method is that it will converge to some root of the polynomial *no matter what value you choose for your initial approximation*. It is also fairly efficient: each iteration requires only a single evaluation of the polynomial and its first and second derivatives, which can be done easily even when calculating by hand or with a small pocket calculator.

The main drawback of Laguerre's method is that its theoretical analysis is not as complete as we would like. As previously noted, convergence is third order when approaching a real, simple zero, and this has been shown to hold for simple complex zeros as well. In the neighborhood of a multiple zero it provides only first-order convergence, but this is not a serious difficulty, since multiple roots may be eliminated by dividing out the greatest common divisor of the polynomial with its derivative. Empirically, failure to converge appears to be extremely rare, so the method is a fine candidate for a general-purpose polynomial root-finding algorithm. Nevertheless, most software packages for finding the roots of polynomials use other methods, such as the Jenkins-Traub method, that are less efficient but whose analysis is better understood.
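Dividing out the greatest common divisor can be illustrated with exact rational arithmetic; `poly_div` and `poly_gcd` below are illustrative helpers, not library functions:

```python
from fractions import Fraction

def poly_div(num, den):
    """Polynomial long division on coefficient lists (highest degree
    first); returns (quotient, remainder)."""
    num = [Fraction(c) for c in num]
    den = [Fraction(c) for c in den]
    quot = []
    while len(num) >= len(den):
        f = num[0] / den[0]
        quot.append(f)
        pad = den + [Fraction(0)] * (len(num) - len(den))
        num = [a - f * b for a, b in zip(num, pad)][1:]
    return quot, num

def poly_gcd(p, q):
    """Greatest common divisor via the Euclidean algorithm, made monic."""
    while q:
        _, r = poly_div(p, q)
        while r and r[0] == 0:   # strip leading zeros of the remainder
            r = r[1:]
        p, q = q, r
    p = [Fraction(c) for c in p]
    return [c / p[0] for c in p]

# P(x) = (x - 1)^2 (x - 2) = x^3 - 4x^2 + 5x - 2 has a double root at 1.
P  = [1, -4, 5, -2]
dP = [3, -8, 5]                  # P'(x)
g = poly_gcd(P, dP)              # the common factor x - 1
deflated, _ = poly_div(P, g)     # x^2 - 3x + 2: only simple roots remain
assert g == [1, -1] and deflated == [1, -3, 2]
```

The deflated polynomial (x - 1)(x - 2) has only simple roots, so Laguerre's method recovers its cubic convergence on it.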

References:

Forman S. Acton, Numerical Methods That Work.

Anthony Ralston and Philip Rabinowitz, A First Course in Numerical Analysis.

William H. Press et al., Numerical Recipes in FORTRAN.