Fractal image compression was promoted in the early 1990s as the future of digital image compression. The basic idea was, given an image, to find the particular fractal (or combination of fractals) that would generate that image after a few iterations. The way to do this is to find bits of the image that are self-similar, but at different scales. For example, in a picture of clouds, the edge of a cloud looks much the same whether you look at the whole cloud or at a tiny piece of it. So you say something like "this little bit of cloud has the same shape as the big one, only smaller", and you only have to store a few little pieces of information about how to regenerate the big cloud. This promised a huge compression ratio: instead of storing millions of pixel values, you store just a few equations which, when iterated a few thousand times, regenerate the original image. If you only wanted a rough image, you would only iterate a few times.
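A toy sketch of the "few equations regenerate the image" idea: an iterated function system (here the three affine maps of the Sierpinski triangle, not an encoder for real photographs) run via the chaos game. The image is described entirely by the three corner points and the halving rule.

```python
import random

# The three affine maps of the Sierpinski triangle IFS: each map halves
# the distance from the current point to one corner. These few numbers
# stand in for the "few equations" that replace stored pixel data.
CORNERS = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]

def chaos_game(iterations, seed=42):
    """Generate attractor points by repeatedly applying a randomly
    chosen map (the so-called chaos game). More iterations give a
    finer image; fewer give a rough one, as described above."""
    rng = random.Random(seed)
    x, y = 0.5, 0.5
    points = []
    for _ in range(iterations):
        cx, cy = rng.choice(CORNERS)
        x, y = (x + cx) / 2.0, (y + cy) / 2.0  # contract toward a corner
        points.append((x, y))
    return points

pts = chaos_game(10000)
```

Plotting `pts` reproduces the fractal at whatever resolution you like, which is the property the writeup is getting at.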

The other purported advantage was that you could decompress the image at a higher resolution than you compressed it at: the "fractal nature" of the encoding would interpolate in a more natural manner than simply enlarging the image.

The problem is that it hasn't come to fruition.

Don't count it out yet, but it doesn't look like fractal image compression will be widely available for some time.
One thing to keep in mind regarding wavelet compression is that wavelets are somewhat fractalish in nature, due to the recursive, self-similar way they break up the image being transformed.

For what it's worth, my 3D engine uses wavelet compression for everything, and even with a very simple pseudo-Haar transform, some of my textures get as much as 150:1 compression, and meshes often get down to 3 bytes per triangle (that's including texture coordinates and surface normals) - for example, one actual mesh (a 6962-triangle torus) compresses down to a 21370-byte file.

Admittedly, the insanely high image compression ratios are on textures with very regular patterns - "real" images tend to get around 2:1, and unfortunately, random data actually expands in size, which is quite typical of standard wavelet schemes. Fortunately, my engine doesn't require the use of wavelet-transformed CODECs; it has another CODEC which is basically a simplified PNG, and also many hooks for adding other CODECs as they're implemented.
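The engine's pseudo-Haar transform isn't shown here, but the general mechanism can be illustrated with an ordinary one-level 1-D Haar step: pairwise averages plus pairwise differences. On regular data the differences come out zero, which is exactly where the high ratios on repetitive textures come from (and why random data gains nothing).

```python
def haar_step(signal):
    """One level of the 1-D Haar transform: pairwise averages
    (low-pass half) followed by pairwise differences (high-pass half).
    Assumes an even-length input."""
    avgs = [(signal[i] + signal[i + 1]) / 2.0 for i in range(0, len(signal), 2)]
    diffs = [(signal[i] - signal[i + 1]) / 2.0 for i in range(0, len(signal), 2)]
    return avgs + diffs

def haar_inverse_step(coeffs):
    """Invert one Haar level exactly: each (average, difference) pair
    reconstructs the original two samples."""
    half = len(coeffs) // 2
    out = []
    for a, d in zip(coeffs[:half], coeffs[half:]):
        out.extend([a + d, a - d])
    return out

# The flat run at the start yields zero detail coefficients, which an
# entropy coder can store almost for free.
row = [8, 8, 8, 8, 5, 1, 2, 6]
coeffs = haar_step(row)        # [8.0, 8.0, 3.0, 4.0, 0.0, 0.0, 2.0, -2.0]
restored = haar_inverse_step(coeffs)
```

A real codec would recurse on the low-pass half and then quantize or threshold the detail coefficients before entropy coding; this sketch only shows the transform itself.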

Now, if I had a fractal wavelet transform, I'm sure I could convey even more information in less space. Damn Michael Barnsley though - one of the patents he holds covers partitioning an image into pieces for fractal analysis, which is absolutely mandatory for fractal-wavelet image compression. Butthole.

In fact, I seem to recall that his partitioning patent specifically covers quadtree-based partitioning, which is exactly the sort of partitioning wavelets use. It seems almost as though he was deliberately trying to remove any possible competitors in the fractal compression field, even through such an asinine and obvious patent - obvious to anyone in the graphics field, anyway.
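For anyone who hasn't met it, quadtree partitioning just recursively splits a square image into four quadrants until each block is simple enough to encode on its own. A toy sketch (the "busy" uniformity test here is a made-up stand-in for a real variance or detail measure):

```python
def quadtree_blocks(x, y, size, is_uniform, min_size=1):
    """Recursively split a square region into quadrants until each
    block satisfies the uniformity predicate or hits the minimum size.
    Returns the leaf blocks as (x, y, size) tuples."""
    if size <= min_size or is_uniform(x, y, size):
        return [(x, y, size)]
    half = size // 2
    blocks = []
    for dx in (0, half):
        for dy in (0, half):
            blocks.extend(quadtree_blocks(x + dx, y + dy, half,
                                          is_uniform, min_size))
    return blocks

# Toy predicate: only the top-left quarter of an 8x8 image is "busy",
# so it splits all the way down while the rest stays as large blocks.
busy = lambda x, y, size: not (x < 4 and y < 4) or size == 1
leaves = quadtree_blocks(0, 0, 8, busy)
```

Detailed regions end up as many small blocks and flat regions as a few big ones, which is why the same partitioning scheme is attractive to both fractal and wavelet coders.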

Oh yeah. This has been yet another MaggieRant. And this one I think I am most definitely qualified to spew forth about. :)

This technique works very well for textures - i.e. repeated patterns used to cover 3D objects. The demoscene uses them extensively to get amazing 3D demos to fit in 64k; since no actual textures are stored, only equations to generate those textures, the amount of data required to display them is drastically reduced.
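A minimal sketch of the idea: a classic demoscene "plasma" texture built from nothing but a few sine terms. The specific coefficients are arbitrary choices for illustration; the point is that the whole texture is the code, not stored pixels.

```python
import math

def plasma_texture(size):
    """Generate a size x size plasma texture purely from equations.
    A handful of sine terms stand in for kilobytes of pixel data."""
    tex = []
    for y in range(size):
        row = []
        for x in range(size):
            v = (math.sin(x * 0.3) + math.sin(y * 0.2)
                 + math.sin((x + y) * 0.15)) / 3.0   # value in [-1, 1]
            row.append(int((v + 1.0) * 127.5))       # map to 0..255
        tex.append(row)
    return tex

tex = plasma_texture(64)
```

Call `plasma_texture(1024)` instead and you get the same pattern with no pixelation - the resolution independence the next paragraph describes.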

This system has a pleasant side effect: since the entire demo is generated from equations alone, there is in fact no upper resolution limit. With a sufficiently powerful computer and a good demo, it could be displayed at very high resolution with no pixelation of textures or blockiness of models. After a certain point the models would start needing more data to look decent (curves would begin to show their component lines; smoothing only works so far), though if you used mathematically generated curves in the first place you could reach extremely high resolutions without needing to add data.
