Some additional information about fixed-point iteration...

There is a way to determine whether this method will converge on a given interval, and whether the point it converges to is the only fixed point on that interval.

The fixed point theorem states:

Let g(x) be continuous on the interval [a,b], with a <= g(x) <= b for all x in [a,b]. Suppose additionally that g' exists on (a,b) and that a constant 0 < k < 1 exists with

|g'(x)| <= k, for all x on the interval (a,b)

Then, for any number p_{0} in [a,b], the sequence defined by

p_{n} = g(p_{n-1}), n >= 1

converges to the unique fixed point p in [a,b].
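As a concrete illustration of the theorem, here is a minimal sketch in Python. The function and interval are my own choice for the example: g(x) = cos(x) maps [0,1] into [cos(1), 1], which sits inside [0,1], and |g'(x)| = |sin(x)| <= sin(1) ≈ 0.84 < 1 there, so the theorem guarantees convergence to the unique fixed point.

```python
import math

def fixed_point(g, p0, tol=1e-10, max_iter=100):
    """Iterate p_n = g(p_{n-1}) until successive values agree to within tol."""
    p = p0
    for _ in range(max_iter):
        p_next = g(p)
        if abs(p_next - p) < tol:
            return p_next
        p = p_next
    raise RuntimeError("fixed-point iteration did not converge")

# Example (my choice, not from the text): g(x) = cos(x) on [0, 1]
# satisfies both hypotheses, so any starting point in [0, 1] works.
p = fixed_point(math.cos, 0.5)  # converges to the unique fixed point ~0.739085
```

Starting from any other p_{0} in [0,1] (say 0.0 or 1.0) lands on the same fixed point, which is exactly the uniqueness the theorem promises.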

In simpler terms, if the function g(x) evaluated on the interval [a,b] obeys

a <= g(x) <= b

(its graph lies entirely inside the square bounded by x=a, x=b, y=a, y=b) AND the first derivative stays strictly inside (-1, 1) on that interval (bounded away from ±1 by some constant k < 1), then there is a unique fixed point to which the iteration will converge.
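These two conditions can be checked numerically before iterating. The sketch below samples g and g' on a grid; the sampling approach and the example function are assumptions of mine, not something from the text (and a finite grid is only a spot-check, not a proof).

```python
import math

def check_hypotheses(g, dg, a, b, n=1000):
    """Spot-check the two sufficient conditions on [a, b]:
    (1) a <= g(x) <= b (the graph stays inside the square), and
    (2) |g'(x)| <= k for some k < 1 (contraction)."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    self_mapping = all(a <= g(x) <= b for x in xs)
    k = max(abs(dg(x)) for x in xs)  # estimated bound on |g'|
    return self_mapping, k

# Example of my own: g(x) = cos(x), g'(x) = -sin(x), on [0, 1].
ok, k = check_hypotheses(math.cos, lambda x: -math.sin(x), 0.0, 1.0)
# ok is True and k ~ sin(1) ~ 0.841 < 1, so both hypotheses hold.
```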

Finally, something of note: if you want to solve for a root of an equation f(x) = 0 using fixed-point iteration, and you rearrange it just right, into the form g(x) = x - f(x)/f'(x), whose fixed points are exactly the roots of your original equation, voila! You have Newton's method, with its very rapid (quadratic) convergence.
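The Newton connection can be seen by running the same fixed-point loop with g(x) = x - f(x)/f'(x). The example problem below (finding sqrt(2) as the root of f(x) = x^2 - 2) is my own illustration.

```python
def newton_as_fixed_point(f, df, p0, tol=1e-12, max_iter=50):
    """Fixed-point iteration with g(x) = x - f(x)/f'(x),
    which is exactly Newton's method for the root of f."""
    p = p0
    for _ in range(max_iter):
        p_next = p - f(p) / df(p)  # p_next = g(p)
        if abs(p_next - p) < tol:
            return p_next
        p = p_next
    raise RuntimeError("iteration did not converge")

# Example of my own: f(x) = x^2 - 2 has root sqrt(2).
root = newton_as_fixed_point(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```

Note how few iterations this takes compared with a generic contraction: each Newton step roughly doubles the number of correct digits, which is the "very rapid convergence" mentioned above.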