One thing to keep in mind regarding wavelet compression is that wavelets are somewhat fractalish in nature, due to the recursive, self-similar way they break up an image for transformation.

For what it's worth, my 3D engine uses wavelet compression for everything, and even with a very simple pseudo-Haar transform, some of my textures get as much as 150:1 compression, and meshes often get down to 3 bytes per triangle (that's including texture coordinates and surface normals) - for example, one actual mesh (a 6962-triangle torus) compresses down to a 21370-byte file.
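For anyone who hasn't seen one, here's a bare-bones sketch of the averaging-and-differencing idea behind a Haar-style transform (just a generic illustration of the recursion, not my engine's actual CODEC):

    #include <stdio.h>
    #include <string.h>

    /* One level of a Haar-style step: each pair (a, b) becomes an
       average (a + b) / 2 and a difference (a - b) / 2.  Smooth,
       regular data leaves the differences near zero, which is what
       makes the coefficients so easy to compress afterwards. */
    static void haar_step(float *data, int n)
    {
        float tmp[1024];             /* sketch only: assumes n <= 1024 */
        int half = n / 2;
        for (int i = 0; i < half; i++) {
            float a = data[2 * i], b = data[2 * i + 1];
            tmp[i]        = (a + b) * 0.5f;   /* low-pass (average)  */
            tmp[half + i] = (a - b) * 0.5f;   /* high-pass (detail)  */
        }
        memcpy(data, tmp, n * sizeof(float));
    }

    /* The full transform keeps re-applying the step to the low-pass
       half - the same operation on a half-size copy of the signal,
       which is the self-similar bit. */
    static void haar_transform(float *data, int n)
    {
        for (int len = n; len >= 2; len /= 2)
            haar_step(data, len);
    }

    int main(void)
    {
        /* A very "regular" signal: nearly every detail coefficient
           comes out zero, so an entropy coder can crush it. */
        float signal[8] = { 10, 10, 10, 10, 20, 20, 20, 20 };
        haar_transform(signal, 8);
        for (int i = 0; i < 8; i++)
            printf("%g ", signal[i]);  /* prints: 15 -5 0 0 0 0 0 0 */
        printf("\n");
        return 0;
    }

In 2D you do the same thing along the rows and then the columns at each level, which is where the quadrant structure comes from.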

Admittedly, the insanely high image compression ratios come from textures with very regular patterns - "real" images tend to get 2:1, and unfortunately, random data actually expands in size, which is quite typical of standard wavelet schemes. Fortunately, my engine doesn't require the use of the wavelet-based CODECs; it has another CODEC which is basically a simplified PNG, and also many hooks for adding other CODECs as they're implemented.
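If you're wondering what the hooks look like, think of a registration table of encode/decode function pointers. The following is a purely hypothetical sketch with invented names - the general shape of such a plug-in interface, not my engine's real one:

    #include <stddef.h>

    /* Hypothetical CODEC hook - names and fields invented for
       illustration.  Each CODEC registers an encode/decode pair plus a
       tag so the file format can record which CODEC produced a blob. */
    typedef struct {
        const char *name;     /* e.g. "pseudo-haar", "simple-png"     */
        unsigned    tag;      /* identifier written into the stream   */
        /* Both return bytes written to dst, or 0 on failure. */
        size_t (*encode)(const unsigned char *src, size_t src_len,
                         unsigned char *dst, size_t dst_cap);
        size_t (*decode)(const unsigned char *src, size_t src_len,
                         unsigned char *dst, size_t dst_cap);
    } codec_hook;

    #define MAX_CODECS 16
    static codec_hook codecs[MAX_CODECS];
    static int        codec_count;

    /* Adding a new CODEC is just registering another hook at startup. */
    int register_codec(const codec_hook *c)
    {
        if (codec_count >= MAX_CODECS)
            return -1;
        codecs[codec_count++] = *c;
        return 0;
    }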

Now, if I had a fractal wavelet transform, I'm sure I could convey even more information in less space. Damn Michael Barnsley though - one of the patents he holds covers partitioning an image into pieces for fractal analysis, which is absolutely mandatory for fractal-wavelet image compression. Butthole.

In fact, I seem to recall that his partitioning patent specifically covers quadtree-based partitioning, which is exactly the sort of partitioning wavelets use. It seems almost as though he was quite deliberately trying to shut out any possible competitors in the fractal compression field, even through a patent that's asininely obvious to anyone in the graphics field.
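For anyone who hasn't run into it: quadtree partitioning just means recursively chopping an image into four quadrants until some stopping criterion is met. A bare-bones, generic sketch - not tied to any particular fractal or wavelet CODEC - looks like this:

    #include <stdio.h>

    /* Recursively split an image region into four quadrants until the
       blocks reach some minimum size.  Fractal coders stop splitting
       when a block is a good enough match for a transformed copy of
       another block; a 2D wavelet transform implicitly walks the same
       quadrant structure. */
    static void quadtree_partition(int x, int y, int w, int h, int min_size)
    {
        if (w <= min_size || h <= min_size) {
            printf("leaf block at (%d,%d) size %dx%d\n", x, y, w, h);
            return;
        }
        int hw = w / 2, hh = h / 2;
        quadtree_partition(x,      y,      hw,     hh,     min_size);
        quadtree_partition(x + hw, y,      w - hw, hh,     min_size);
        quadtree_partition(x,      y + hh, hw,     h - hh, min_size);
        quadtree_partition(x + hw, y + hh, w - hw, h - hh, min_size);
    }

    int main(void)
    {
        quadtree_partition(0, 0, 256, 256, 64); /* 16 leaf blocks of 64x64 */
        return 0;
    }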

Oh yeah. This has been yet another MaggieRant. And this one I think I am most definitely qualified to spew forth about. :)