The **Levenberg-Marquardt method** is an adaptive compromise between the Steepest Descent method and Newton's method for finding the minimum of a given system of non-linear functions: each step is a weighted blend of the two, with the weight shifting as the iteration proceeds.

Let F(x) denote the matrix form of the non-linear system (where x is the parameter vector to each function in the system):

           | f1(x1, x2, ..., xn) |
           | f2(x1, x2, ..., xn) |
    F(x) = |          .          |
           |          .          |
           | fn(x1, x2, ..., xn) |

Newton's method requires a good initial approximation (x(0)), then uses the sequence:

`x(k) = x(k-1) - J(x(k-1))^{-1} F(x(k-1))`,

which converges rapidly to a solution x (if the initial approximation is good enough). Here, J is the Jacobian matrix, the matrix of partial derivatives of f1,f2,...fn with respect to x1,x2,...xn.
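The iteration above can be sketched in a few lines of numpy. The system below (a circle and a hyperbola) is my own illustrative example, not one from the text; in practice one solves the linear system `J(x) delta = F(x)` rather than forming the inverse explicitly.

```python
import numpy as np

# Illustrative system (an assumption, not from the text):
#   f1(x) = x1^2 + x2^2 - 4
#   f2(x) = x1*x2 - 1
def F(x):
    return np.array([x[0]**2 + x[1]**2 - 4.0,
                     x[0] * x[1] - 1.0])

def J(x):
    # Jacobian: partial derivatives of f1, f2 w.r.t. x1, x2
    return np.array([[2.0 * x[0], 2.0 * x[1]],
                     [x[1],       x[0]]])

def newton(x0, tol=1e-10, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        # Solve J(x) * delta = F(x) instead of inverting J
        delta = np.linalg.solve(J(x), F(x))
        x = x - delta
        if np.linalg.norm(delta) < tol:
            break
    return x

root = newton([2.0, 0.3])   # good initial approximation assumed
```

Started close to a solution, the iteration converges in a handful of steps; started far away, it may diverge, which is exactly why a good initial approximation is needed.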

The Steepest Descent method 'flattens' the system into a single function:

`G(x) = Σ_{i=1}^{n} [f_i(x)]^2`

An approximate minimizer of G(x) is then found by repeatedly stepping in the direction of steepest descent, -∇G(x), and the result is used as the initial approximation for Newton's method.
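A minimal sketch of this descent phase, reusing the same illustrative two-equation system as above (my own example, not the text's). The gradient of G works out to `2 J(x)^T F(x)`, and a simple backtracking line search keeps each step from overshooting.

```python
import numpy as np

# Illustrative system (an assumption, not from the text):
def F(x):
    return np.array([x[0]**2 + x[1]**2 - 4.0,
                     x[0] * x[1] - 1.0])

def J(x):
    return np.array([[2.0 * x[0], 2.0 * x[1]],
                     [x[1],       x[0]]])

def G(x):
    # G(x) = sum of squares of the f_i
    return float(F(x) @ F(x))

def steepest_descent(x0, tol=1e-8, max_iter=5000):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        grad = 2.0 * J(x).T @ F(x)        # gradient of G(x)
        if np.linalg.norm(grad) < tol:
            break
        t = 1.0                            # backtracking line search
        while G(x - t * grad) > G(x) - 1e-4 * t * (grad @ grad):
            t *= 0.5
            if t < 1e-16:
                break
        x = x - t * grad
    return x

x_min = steepest_descent([3.0, 0.2])       # arbitrary starting point
```

Each step is guaranteed to decrease G, so the method makes quick early progress from a poor starting point, but it slows down badly near the minimum; that is the point of handing off to Newton's method.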

Each iteration of Newton's method costs O(n^{3}) operations, since it requires solving a linear system built from the Jacobian, and the n^{2} partial derivatives in J must be computed with significant accuracy. The advantage of Newton's method is that, once the current approximation is accurate enough, the minimum is quickly attained. The Steepest Descent method, on the other hand, arrives at 'good' initial values quickly but crawls near the minimum. For this reason, the Levenberg-Marquardt method initially weights in the direction of Steepest Descent, and when convergence is detected, weights towards Newton's method.
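The blending can be sketched with one knob: a damping parameter λ added to the (Gauss-)Newton normal equations, `(J^T J + λI) δ = -J^T F`. Large λ makes the step point along steepest descent; small λ recovers the Newton-like step. The system and the halving/doubling schedule below are my own illustrative choices, not the text's.

```python
import numpy as np

# Illustrative system (an assumption, not from the text):
def F(x):
    return np.array([x[0]**2 + x[1]**2 - 4.0,
                     x[0] * x[1] - 1.0])

def J(x):
    return np.array([[2.0 * x[0], 2.0 * x[1]],
                     [x[1],       x[0]]])

def levenberg_marquardt(x0, lam=1.0, tol=1e-10, max_iter=200):
    x = np.asarray(x0, dtype=float)
    g = float(F(x) @ F(x))                 # current value of G(x)
    for _ in range(max_iter):
        Jx, Fx = J(x), F(x)
        # Large lam -> step tilts toward steepest descent;
        # small lam -> step approaches the Newton-like step.
        delta = np.linalg.solve(Jx.T @ Jx + lam * np.eye(len(x)),
                                -Jx.T @ Fx)
        x_new = x + delta
        g_new = float(F(x_new) @ F(x_new))
        if g_new < g:                      # progress: trust Newton more
            x, g = x_new, g_new
            lam *= 0.5
        else:                              # no progress: lean on descent
            lam *= 2.0
        if g < tol:
            break
    return x

sol = levenberg_marquardt([3.0, 1.0])
```

Decreasing λ after every successful step is a crude version of "detecting convergence": as the iterates approach the minimum, λ shrinks and the method behaves ever more like Newton's method.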

*This method is used by Mathematica 4.0 when told to do so. Information gathered from my Numerical Analysis text:*

Burden, Richard L., and J. Douglas Faires. __Numerical Analysis__, 7th ed. Brooks/Cole, 2001. (pp. 643-644)