Most people who have had a little physics know that Fourier transforms are very useful. But fewer have considered some of the drawbacks of the Fourier transform, and fewer still how to get around these drawbacks.

One of the biggest problems with taking the Fourier transform of a time domain signal -- say a digital recording of a symphony -- is that when you transform, you lose information about when the different frequency components of that signal occur.

Think about a flute in that symphony playing a note. An interesting fact about the flute is that, when played well, it can produce a time domain signal that is very close to a perfect sine wave. The frequency of that sine wave, of course, determines its pitch. So the flute is first silent. Then its sine wave slowly becomes louder and louder. Then it grows silent again. When I take the Fourier transform of this signal, I get a peak at the frequency of the sine wave. Its width will vary inversely with how long the flute played the note: shorter notes give broader peaks. But -- here's the important part -- if I only sample the note for a short while in the middle of its loudest part, and then take the Fourier transform of that sample, I'll get the same result whether the note was played near the beginning of the symphony or near the end; in the limit, the same result as if it had been played forever (from minus infinity to plus infinity in time). This is called box normalization in quantum mechanics, and it is necessary for dealing with any pure sinusoid (in QM this is the free particle).
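This loss of timing information is easy to see numerically. Here is a minimal sketch using NumPy -- the sample rate, note frequency, and helper name `burst` are my own illustrative choices, not anything from the text. The same short 440 Hz note is placed early or late in a recording, and the magnitude of the Fourier transform comes out the same either way; only the phase, which the magnitude spectrum discards, records the timing.

```python
import numpy as np

fs = 8000                 # sample rate in Hz (illustrative)
f0 = 440.0                # the flute's note (concert A)
n = 8192                  # total samples analyzed (~1 second)
t = np.arange(n) / fs

def burst(start, dur):
    """A sine wave that is silent except for `dur` seconds beginning at `start`."""
    env = ((t >= start) & (t < start + dur)).astype(float)
    return env * np.sin(2 * np.pi * f0 * (t - start))

early = burst(0.1, 0.3)   # the note played near the beginning
late  = burst(0.6, 0.3)   # the same note played half a second later

freqs = np.fft.rfftfreq(n, 1 / fs)
mag_early = np.abs(np.fft.rfft(early))
mag_late  = np.abs(np.fft.rfft(late))

# Both spectra peak at ~440 Hz, and their magnitudes agree: the timing
# of the note lives entirely in the (discarded) phase.
print(freqs[np.argmax(mag_early)])   # ~440 Hz
print(np.allclose(mag_early, mag_late, atol=1e-6))
```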

So how do you deal with this? There are two ways, and they are related. One method involves slicing up the time axis into chunks, Fourier transforming each chunk, and lining up the resulting spectra one next to the other to produce a two dimensional surface, the length of which is time, the width of which is frequency, and the height of which corresponds to the amplitude of a given frequency at a given time. This is the short-time Fourier transform -- its squared magnitude is the familiar spectrogram -- and it is closely related to the Wigner distribution (named after the famous Eugene Wigner).
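Here is a bare-bones sketch of that slicing procedure in NumPy. The function name, chunk size, and hop size are my own illustrative choices, not from any particular library; real implementations add more careful windowing and overlap handling.

```python
import numpy as np

def stft(signal, chunk, hop):
    """Slice `signal` into chunks and Fourier transform each one.

    Returns a 2-D array whose rows are time slices and whose columns are
    frequency bins -- the "surface" described above.
    """
    window = np.hanning(chunk)   # taper each slice to reduce spectral leakage
    starts = range(0, len(signal) - chunk + 1, hop)
    return np.array([np.abs(np.fft.rfft(window * signal[s:s + chunk]))
                     for s in starts])

# A 440 Hz tone that switches on halfway through a two-second recording:
fs = 8000
t = np.arange(2 * fs) / fs
x = (t >= 1.0) * np.sin(2 * np.pi * 440 * t)

surface = stft(x, chunk=512, hop=256)
freqs = np.fft.rfftfreq(512, 1 / fs)

# Early slices are flat (silence); late slices peak near 440 Hz --
# unlike the plain Fourier transform, the surface shows *when* the note sounds.
print(freqs[np.argmax(surface[-1])])   # ~440 Hz
```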

The other solution is more difficult to visualize, but you can understand it by analogy with the time-slicing approach above. A wavelet transform effectively filters out specific frequency components of the original signal (i.e. it slices the signal up in the frequency domain rather than the time domain). It does this by using a so-called mother wavelet as a template: copies of the mother wavelet are dilated (blown up and shrunk) and translated in time (made to happen earlier or later), and the signal is compared against each copy. These nifty tricks allow electrical engineers to build devices with better signal to noise performance, and give bored scientists something to ponder in their free time.
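To make the dilation-and-translation idea concrete, here is a rough sketch in NumPy. It uses the Ricker ("Mexican hat") function as an example mother wavelet and plain convolution for the comparison step; the function names, widths, and test signal are illustrative assumptions, not a reference implementation.

```python
import numpy as np

def ricker(width, n=101):
    """Ricker ("Mexican hat") mother wavelet sampled at n points.

    `width` is the dilation: wide copies match low frequencies,
    narrow copies match high frequencies.
    """
    t = np.arange(n) - (n - 1) / 2
    a = (t / width) ** 2
    return (1 - a) * np.exp(-a / 2)

def cwt(signal, widths):
    """Compare `signal` against dilated, translated copies of the wavelet.

    Row i holds the correlation of the signal with the wavelet dilated to
    widths[i], at every translation (time position). A crude sketch of a
    continuous wavelet transform, not a library-grade implementation.
    """
    return np.array([np.convolve(signal, ricker(w), mode='same')
                     for w in widths])

# 25 Hz in the first half of the signal, 100 Hz in the second:
fs = 1000
t = np.arange(fs) / fs
x = np.where(t < 0.5, np.sin(2 * np.pi * 25 * t),
                      np.sin(2 * np.pi * 100 * t))

widths = np.arange(1, 31)
coeffs = cwt(x, widths)   # shape (30, 1000): dilation x translation
```

Wide wavelets respond strongly in the low-frequency first half of the signal and narrow wavelets in the high-frequency second half, so the rows of `coeffs` localize each frequency component in time -- the same payoff as the sliced-up Fourier transform, obtained by slicing in frequency instead.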