This is an extremely good algorithm for estimating the square root of a number. Amazingly, it was known to the ancient Babylonians, even though it is a special case of what is today called Newton's method!

Substituting f(x) = x^{2} - a into the Newton-Raphson iteration x_{n+1} = x_{n} - f(x_{n})/f'(x_{n}), we get this iteration for approximating sqrt(a):

x_{n+1} = (x_{n} + a/x_{n})/2

which makes some sort of odd sense: in multiplicative terms, x_{n} is "as close" to sqrt(a) as a/x_{n} is, so why not take their (additive) average?

The starting value x_{0} can be anything positive you like: 1, or a/4... or any better guess you might have for the root.
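A minimal sketch of the iteration as described above (the function name, tolerance, and stopping test are my own choices, not from the source):

```python
def babylonian_sqrt(a, x0=1.0, rel_tol=1e-12):
    """Approximate sqrt(a) by iterating x -> (x + a/x) / 2.

    a must be positive; x0 is the starting guess (any positive value works).
    """
    if a <= 0.0 or x0 <= 0.0:
        raise ValueError("a and x0 must be positive")
    x = x0
    # Stop once x*x is within a relative tolerance of a.
    while abs(x * x - a) > rel_tol * a:
        x = (x + a / x) / 2.0
    return x
```

For example, `babylonian_sqrt(2.0)` converges to about 1.41421356 in a handful of steps, thanks to the quadratic convergence of Newton's method.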

In a practical floating-point implementation, a can always be reduced to [1,4) (since the square root of the even part of the exponent is known **exactly**), and only finite precision is needed. Thus, we may take x_{1}=1 and calculate in advance the exact number of iterations required, saving ourselves the overhead of a loop-termination test. The main problem is ensuring that roundoff error doesn't cause some slight misconvergence.