## The Problem

One morning in 1988, I found myself sitting in an examination room
at the Canberra College of Advanced Education in Canberra, Australia
(it became the University of Canberra in 1989). It was an ungodly
hour to be already awake, and for the next four and a half hours I had
little to do other than stare at a piece of paper containing three
questions. *Three*. Ninety minutes each, surely - but the
International Mathematical Olympiad simply doesn't work that way.

Question one was geometry, circles within circles, wheels within
wheels. A collective sigh of abject despair filled the room. The Eastern Bloc countries might stand a
chance with that one, but aside from half-hearted diagrams drawn with
ruler and compass, it seemed unlikely anyone in this room of English
speakers would give it a try. Question two oozed combinatorics and
set theory. Reading
the question was enough to see me wishing for a stiff drink and
a lie down. Maybe it would fall to the pigeonhole principle. After
five minutes of getting nowhere fast, I
realized I had completely misunderstood the question. Maybe I'd come
back to that one later.

And so, onward, to question three. This one looked promising, for the
conspicuous appearance of the number *1988*. This was the one.
I remembered the words of the deputy leader the previous evening.
*There's always one question that contains the year, and it's
usually the easy one*. If this one didn't come out, we'd be
doing this again tomorrow, and I didn't want to be
going into Day Two with no score.

*"A function f is defined on the positive integers by: f(1) = 1;
f(3) = 3; f(2n) = f(n), f(4n + 1) = 2f(2n + 1) - f(n), and
f(4n + 3) = 3f(2n + 1) - 2f(n) for all positive integers n.
Determine the number of positive integers n less than or equal to 1988
for which f(n) = n."*

10 percent inspiration and 90 percent perspiration was about right.
After spending more than four hours writing down table after table
of values, the inspiration came. I needed to rewrite all the tables again,
only this time in binary. The answer appeared pretty quickly - the
function f reversed the binary digits (bits) of n. Now,
all I had to do was prove it, count up the answer,
and turn it in for 7 points out of a possible 21 on Day One. Might even
be good enough for a bronze.
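With the benefit of a computer (forbidden in the exam room, of course), the insight is easy to check. The following is my own sketch in Python, not part of the original solution: it implements the recurrence exactly as stated, compares it against a direct bit reversal, and counts the fixed points up to 1988.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def f(n):
    # The recurrence exactly as stated in the problem.
    if n == 1:
        return 1
    if n == 3:
        return 3
    if n % 2 == 0:
        return f(n // 2)
    m = n // 4                        # n is either 4m + 1 or 4m + 3
    if n % 4 == 1:
        return 2 * f(2 * m + 1) - f(m)
    return 3 * f(2 * m + 1) - 2 * f(m)

def reverse_bits(n):
    # Reverse the binary digits of n, dropping leading zeroes.
    return int(bin(n)[2:][::-1], 2)

# f(n) = n exactly when n reads the same forward and backward in
# binary, so the fixed points are the binary palindromes.
assert all(f(n) == reverse_bits(n) for n in range(1, 1989))
print(sum(1 for n in range(1, 1989) if f(n) == n))  # prints 92
```

The count comes out at 92: one binary palindrome at a time, exactly what four hours of tables were circling around.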

I had twenty minutes left.

## The Easier Problem

Making sense of the IMO problem (without four hours of blood,
sweat, and tears) requires the same sort of supernatural insight that
you'd need to pluck the solution to Counting 1 bits out of thin air.
Here's a considerably less obfuscated definition of a similar function,
again using a recurrence relation.

f(2n) = f(n)/2
f(2n + 1) = f(2n) + f(1)

This function is slightly different, because it preserves leading and
trailing zeroes. It takes just one substitution to prove f(0) = 0, and
not much more to show that, once you choose a value for f(1), you can
compute f(2), f(3), f(4), just as far as you'd like. Set f(1) to 128,
for instance, and you'll be able to build a lookup table that would
reverse every 8-bit binary number from 0 (00000000<sub>2</sub>) to
255 (11111111<sub>2</sub>). If you need n bits, just set f(1) to
2<sup>n-1</sup>.
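That construction is easy to mechanize. Here is a short sketch in Python (my own code, with my own helper name): it seeds the table with f(0) and f(1), then lets the two recurrence rules fill in the rest in increasing order.

```python
def build_reversal_table(width):
    # f(0) = 0 and f(1) = 2**(width - 1) seed the table; the two
    # recurrence rules fill in every other entry:
    #   f(2n)     = f(n) / 2
    #   f(2n + 1) = f(2n) + f(1)
    size = 1 << width
    f = [0] * size
    f[1] = 1 << (width - 1)
    for n in range(1, size // 2):
        f[2 * n] = f[n] // 2
        f[2 * n + 1] = f[2 * n] + f[1]
    return f

table = build_reversal_table(8)
print(table[0b00000001])   # 128, i.e. 10000000 in binary
print(table[0b00010110])   # 104, i.e. 01101000 in binary
```

Since reversing twice gives back the original number, `table[table[n]] == n` for every n, which makes a handy sanity check.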

## Uses of the Bit Reversal Function

It is perhaps surprising to discover that there are uses of the bit
reversal function beyond mere recreational mathematics. Many of the
most efficient algorithms in mathematics and computer science are
based on the principle of divide and conquer. If you can partition the
input data into two (or more) smaller parts, and deal with each part
separately, chances are, you're on the way to a good algorithm.
Recursively subdivide the parts into smaller and smaller
pieces until they are reduced to a trivial case, then reassemble the
results.

In many cases, the recursion falls out in the most beautiful way.
The data can be bisected exactly, and the algorithm applied to
the two halves. These algorithms can often be executed "in-place". The
data does not have to be moved or copied from its original location,
and the results may even overwrite the original data if so required.
The data is kept close together in computer memory, which improves
efficiency - it could even be read or written from external
storage media such as disk or tape while the algorithm runs.

The Discrete Fourier Transform (DFT) is a calculation that has
its roots in spectral analysis but has applications in many other
fields of computation, including the Discrete Cosine Transform used
in image compression technologies such as JPEG. The Fast Fourier
Transform (FFT) is a divide and conquer approach to the DFT, most
easily applied when the input size is a power of 2. However, when you
study how the FFT works, the division of data appears not to be
ideal. Instead of splitting the data into a first half and a second
half, it is split into even and odd items. It *is* possible to do,
with a bit of intricate bookkeeping, but it is surely unpleasant, and
it becomes very difficult to skip through the data after it has been
divided time and time again. Often, the dataset for this sort of
calculation is huge, and it is simply impractical to go over it more
than once or twice.

However, with the bit reversal function, things become much, much
simpler. Instead of storing the individual data elements in their
regular order, store them by the bit-reversed value of their index.
The even components are now all at the beginning, and the odd
components are all at the end. Even as the recursion proceeds, all the
data for each progressively smaller part of the calculation appears
*in precisely the right order*. The algorithm can proceed very
efficiently, and while the reordered data is not particularly
human readable, the result is usually just one step of many. The data
can be reordered on initial input and final output (a relatively
fast process) while the computer can perform all the hard work in
between as optimally as possible.
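To make the reordering concrete, here is a small sketch in Python (mine, not from any of the sources below). It permutes a sequence whose length is a power of two by the bit-reversed value of each index, the order in which an in-place FFT wants its input.

```python
def bit_reversed_order(data):
    # Length must be a power of two; each element moves to the
    # position given by reversing the bits of its index.
    size = len(data)
    width = size.bit_length() - 1     # number of index bits
    def rev(n):
        r = 0
        for _ in range(width):
            r = (r << 1) | (n & 1)    # shift bits of n into r, reversed
            n >>= 1
        return r
    return [data[rev(i)] for i in range(size)]

print(bit_reversed_order([0, 1, 2, 3, 4, 5, 6, 7]))
# [0, 4, 2, 6, 1, 5, 3, 7]: evens first, odds last, and each half
# is itself in bit-reversed order, ready for the next subdivision.
```

Notice how the property described above appears at every level: the even elements land in the first half, the odd elements in the second, and the same pattern repeats inside each half as the recursion descends.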

### Sources

Scholes, John. *29th IMO 1988*.
http://www.kalva.demon.co.uk/imo/imo88.html, as viewed on 08 July 2005.

Press, Flannery, Teukolsky, and Vetterling. *Numerical Recipes in C -
The Art of Scientific Computing*. Cambridge University Press,
1988.