The Fibonacci numbers are a commonly-given example in Computer Science and in Mathematics. They will undoubtedly make an appearance in *every single* introductory programming course. It is therefore perhaps surprising that an efficient method for computing them is so little-known! Instead, we make do with methods which take time O^{~}(n) (or even, horror of horrors, time O^{~}(F_{n})!) to compute F_{n}.

The recursive definition

F_{0}=0

F_{1}=1

F_{n+1}=F_{n}+F_{n-1}

leads to a straightforward implementation.
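As a sketch, the direct transcription of the recurrence into Python might look like this (the function name is mine, not the article's):

```python
def fib_naive(n):
    """Direct transcription of the recurrence F_{n+1} = F_n + F_{n-1}.

    Elegant, but the running time grows roughly like F_n itself,
    since each call spawns two more calls."""
    if n < 2:
        return n  # base cases: F_0 = 0, F_1 = 1
    return fib_naive(n - 1) + fib_naive(n - 2)
```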
Unfortunately, this implementation is horribly inefficient: to compute F_{n+1}, you need to compute F_{n} and F_{n-1}, and perform an extra addition. Solving the recurrence relation yields time O^{~}(F_{n}) to compute F_{n} -- this is time exponential in n, therefore exponentially exponential in the *size* of the input n.

Most courses will therefore teach a somewhat faster algorithm: at any given step, keep variables x and y, with x being F_{n} and y being F_{n+1}. The loop updates by saying

x' = y

y' = x+y

(a temporary variable will often be needed to prevent one update clobbering the other). This loop requires about n iterations to compute F_{n}, so the time required is O^{~}(n) -- linear in n, but exponential in the size of n.
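In Python the loop above can be sketched like so; tuple assignment plays the role of the temporary variable (again, the function name is mine):

```python
def fib_linear(n):
    """Keep x = F_k and y = F_{k+1}; each iteration advances k by one.

    Python's simultaneous tuple assignment evaluates the right-hand
    side before assigning, so no explicit temporary is needed."""
    x, y = 0, 1  # F_0 and F_1
    for _ in range(n):
        x, y = y, x + y  # x' = y; y' = x + y
    return x
```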

But this algorithm too is rubbish! Binet's formula gives us a hint: by computing powers of some irrational quadratic surd, we can get the answer. And lots of people have seen how to compute the n'th power in time O^{~}(log n). Why not apply it here, and be done in logarithmic time?

Unfortunately, this approach cannot work: you'll need to compute an indecently large number of digits of sqrt(5) (or alternatively of the golden ratio φ), which will nullify your advantages. We need to take powers of something else.

That something is the square 2x2 matrix

[[1 1]
[1 0]]

This matrix (call it G) has the property that G^{n} is exactly

[[F_{n+1} F_{ n }]
[F_{ n } F_{n-1}]],

as can easily be proved by induction.

(People with a background in linear algebra might note that the matrix's characteristic polynomial is the familiar x^{2}-x-1, so that its eigenvalues are precisely the golden ratio φ and its conjugate 1-φ. Really bored people with a background in linear algebra will go on to diagonalize the matrix and derive Binet's formula. Really *really* bored people will go on to derive the general solution for a linear recurrence from this technique.)
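For the really bored, the diagonalization sketch goes as follows (writing ψ = 1-φ for the conjugate eigenvalue):

```latex
G = P \begin{pmatrix} \varphi & 0 \\ 0 & \psi \end{pmatrix} P^{-1}
\quad\Longrightarrow\quad
G^{n} = P \begin{pmatrix} \varphi^{n} & 0 \\ 0 & \psi^{n} \end{pmatrix} P^{-1},
```

and reading off the off-diagonal entry of G^{n} yields Binet's formula, F_{n} = (φ^{n} - ψ^{n})/√5.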
So, to compute F_{n} in time O^{~}(log n), we need to show how to compute G^{n} with O(log n) matrix multiplications. But that's easy (see also fast exponentiation for an explanation of this process)!

G^{0} = I

G^{2k+1} = G^{2k}*G

G^{2k} = (G^{k})^{2}

So we can compute G^{n} efficiently; the rest is merely coding.
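That coding might look something like the following sketch in Python, using the binary (square-and-multiply) form of the recurrences above; the function names are mine:

```python
def mat_mult(a, b):
    """Multiply two 2x2 matrices, each given as a pair of row-tuples."""
    return ((a[0][0] * b[0][0] + a[0][1] * b[1][0],
             a[0][0] * b[0][1] + a[0][1] * b[1][1]),
            (a[1][0] * b[0][0] + a[1][1] * b[1][0],
             a[1][0] * b[0][1] + a[1][1] * b[1][1]))

def mat_pow(m, n):
    """Compute m^n using O(log n) matrix multiplications,
    via G^{2k} = (G^k)^2 and G^{2k+1} = G^{2k} * G."""
    result = ((1, 0), (0, 1))  # the identity matrix, G^0
    while n:
        if n & 1:               # odd exponent: multiply in the current square
            result = mat_mult(result, m)
        m = mat_mult(m, m)      # square for the next binary digit
        n >>= 1
    return result

def fib_fast(n):
    """F_n is the off-diagonal entry of G^n."""
    G = ((1, 1), (1, 0))
    return mat_pow(G, n)[0][1]
```

Note that Python's arbitrary-precision integers quietly take care of the growing word size discussed in the note below.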

This is probably about the most efficient method of calculating Fibonacci numbers. It's exponentially better than the "efficient" linear algorithm, and *exponentially exponentially* better than the naïve translation of the recurrence relation!

#### Note on "O^{~}(f(n))"

This formalism, used above, will not be too familiar. It means that O(f(n)) arithmetic operations on *words* are required, where words are assumed to be "big enough" to hold the result. Of course this isn't true -- words have some limited size. If we wish to have unbounded n, we must replace these operations with operations on variable-width numbers: a 256-bit adder *will* be slower than a 32-bit adder (even if a carry lookahead adder is used); in software, no such parallelism is possible, and the slowdown is even greater. This is particularly important with numbers which grow exponentially with n, like the Fibonacci numbers: the number of digits required for F_{n} is O(n).

Still, counting arithmetic operations is easy to do, and is accurate enough for most purposes. If required, we can translate the time bounds above to absolute bounds ("time O(g(n))") by multiplying by appropriate factors. The logarithmic algorithm remains *much* better.