The Fibonacci numbers are a commonly-given example in Computer Science and in Mathematics. They will undoubtedly make an appearance in every introductory programming course. It is therefore perhaps surprising that an efficient method for computing them is so little-known! Instead, we make do with methods which take time O~(n) (or even, horror of horrors, time O~(F_n)!) to compute F_n.

The recursive definition

F_0 = 0
F_1 = 1
F_{n+1} = F_n + F_{n-1}

leads to a straightforward implementation. Unfortunately, this implementation is horribly inefficient: to compute F_{n+1}, you need to compute F_n and F_{n-1}, and perform an extra addition. Solving the recurrence relation yields time O~(F_n) to compute F_n -- this is time exponential in n, and therefore doubly exponential in the size (the number of digits) of the input n.
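A minimal sketch of this naive translation, in Python (the function name is my own, not the article's):

```python
def fib_naive(n):
    """Direct translation of the recurrence; runs in time O~(F_n)."""
    if n < 2:
        return n          # F_0 = 0, F_1 = 1
    return fib_naive(n - 1) + fib_naive(n - 2)
```

Timing fib_naive(30) against fib_naive(35) gives a good feel for the exponential blowup.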

Most courses will therefore teach a somewhat faster algorithm: at any given step, keep variables x and y, with x being F_n and y being F_{n+1}. The loop updates by saying

x' = y
y' = x+y
(a temporary variable will often be needed to prevent one update clobbering the other). This loop requires about n iterations to compute F_n, so the time required is O~(n) -- linear in n, but exponential in the size of n.
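A sketch in Python, where tuple assignment stands in for the explicit temporary variable:

```python
def fib_linear(n):
    """Maintain (x, y) = (F_k, F_{k+1}); n loop iterations, time O~(n)."""
    x, y = 0, 1
    for _ in range(n):
        x, y = y, x + y   # simultaneous update, so neither value is clobbered
    return x
```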

But this algorithm too is rubbish! Binet's formula gives us a hint: by computing powers of some irrational quadratic surd, we can get the answer. And lots of people have seen how to compute the nth power in time O~(log n). Why not apply it here, and be done in logarithmic time?
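For reference, that power-computing trick is repeated squaring, which needs only O(log n) multiplications; a Python sketch (the naming is mine) for plain integers:

```python
def fast_pow(b, n):
    """Compute b**n with O(log n) multiplications (square-and-multiply)."""
    result = 1
    while n > 0:
        if n & 1:          # low bit of the exponent set: multiply in this square
            result *= b
        b *= b             # square the base
        n >>= 1            # move to the next bit of the exponent
    return result
```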

Unfortunately, this approach cannot work: you'll need to compute an indecently large number of digits of sqrt(5) (or alternatively of the golden ratio φ), which will nullify your advantages. We need to take powers of something else.

That something is the square 2x2 matrix

[[1 1]
 [1 0]]
This matrix (call it G) has the property that G^n is exactly
[[F_{n+1} F_n    ]
 [F_n     F_{n-1}]],
as can easily be proved by induction. (People with a background in linear algebra might note that the matrix's characteristic polynomial is the familiar x^2 - x - 1, so that its eigenvalues are precisely the golden ratio φ and its conjugate 1-φ. Really bored people with a background in linear algebra will go on to diagonalize the matrix and derive Binet's formula. Really really bored people will go on to derive the general solution for a linear recurrence from this technique.)

So, to compute Fn in time O~(log n), we need to show how to compute Gn with O(log n) matrix multiplications. But that's easy (see also fast exponentiation for an explanation of this process)!

G^0 = I
G^(2k+1) = G^(2k) * G
G^(2k) = (G^k)^2

So we can compute G^n efficiently; the rest is merely coding.
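That coding might look like this Python sketch (the representation and names are my own choices):

```python
def fib_log(n):
    """Compute F_n in O~(log n) by fast exponentiation of G = [[1,1],[1,0]]."""
    def mat_mul(a, b):
        # 2x2 matrix product; matrices are tuples of row-tuples
        return (
            (a[0][0] * b[0][0] + a[0][1] * b[1][0],
             a[0][0] * b[0][1] + a[0][1] * b[1][1]),
            (a[1][0] * b[0][0] + a[1][1] * b[1][0],
             a[1][0] * b[0][1] + a[1][1] * b[1][1]),
        )

    def mat_pow(m, k):
        # G^0 = I; G^(2k) = (G^k)^2; G^(2k+1) = G^(2k) * G
        result = ((1, 0), (0, 1))  # identity matrix
        while k > 0:
            if k & 1:
                result = mat_mul(result, m)
            m = mat_mul(m, m)
            k >>= 1
        return result

    g = ((1, 1), (1, 0))
    return mat_pow(g, n)[0][1]  # entry (0, 1) of G^n is F_n
```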

This is probably about the most efficient method of calculating Fibonacci numbers. It's exponentially better than the "efficient" linear algorithm, and doubly exponentially better than the naïve translation of the recurrence relation!


Note on "O~(f(n))"

This formalism, used above, may not be familiar. It means that O(f(n)) arithmetic operations on words are required, where words are assumed to be "big enough" to hold the result. Of course this isn't true -- words have some limited size. If we wish to have unbounded n, we must replace these operations with operations on variable-width numbers: a 256-bit adder will be slower than a 32-bit adder (even if a carry lookahead adder is used); in a software implementation, no such parallelism is possible, and the slowdown is even greater. This is particularly important with numbers which grow exponentially with n, like the Fibonacci numbers: the number of digits required for F_n is O(n).

Still, counting arithmetic operations is easy to do, and is accurate enough for most purposes. If required, we can translate the time bounds above to absolute bounds ("time O(g(n))") by multiplying by appropriate factors. The logarithmic algorithm remains much better.
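To see that linear digit growth concretely, here is a small Python demonstration (the constant log10(φ) ≈ 0.209 comes from Binet's formula):

```python
import math

def fib(n):
    # the linear-time loop from above
    x, y = 0, 1
    for _ in range(n):
        x, y = y, x + y
    return x

phi = (1 + math.sqrt(5)) / 2
for n in (100, 1000, 10000):
    # Binet's formula predicts roughly n * log10(phi) ~ 0.209 * n digits
    print(n, len(str(fib(n))), round(n * math.log10(phi)))
```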