Fractal image compression was promoted in the early 1990s as the future of digital image compression. The basic idea was, given an image, to find the particular fractal, or combination of fractals, that would regenerate the image after a few iterations. The way to do this is to find parts of the image that are self-similar at different scales. For example, in a picture of clouds, the edge of a cloud looks much the same whether you look at the whole cloud or at a tiny piece of it. So the encoder says, in effect, "this little bit of cloud has the same shape as the big one, only smaller", and stores just a few pieces of information describing how to regenerate the big cloud. This promised huge compression ratios: instead of storing millions of pixel values, you store a few equations which, when iterated a few thousand times, reproduce the original image. If you only wanted a rough image, you would iterate fewer times.
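The decoder side of this idea can be sketched with an iterated function system (IFS): a handful of affine maps, applied over and over, generate a detailed image from almost no stored data. The three maps below are the textbook Sierpinski-triangle set, used purely as an illustration; they are not taken from any actual codec.

```python
import random

# Three affine contraction maps. Together they are the entire "compressed
# file" for the Sierpinski triangle: a few coefficients instead of pixels.
MAPS = [
    lambda x, y: (0.5 * x,        0.5 * y),        # bottom-left copy
    lambda x, y: (0.5 * x + 0.5,  0.5 * y),        # bottom-right copy
    lambda x, y: (0.5 * x + 0.25, 0.5 * y + 0.5),  # top copy
]

def render(iterations=20000, seed=0):
    """Run the 'chaos game': apply a randomly chosen map at each step.
    The points rapidly converge onto the fractal, so iterating longer
    fills in more detail -- fewer iterations give a rougher image."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    points = []
    for i in range(iterations):
        x, y = rng.choice(MAPS)(x, y)
        if i > 20:  # skip the initial transient before the attractor
            points.append((x, y))
    return points

points = render()
```

Plotting `points` would show the full triangle, even though the only data stored are the nine coefficients of the three maps.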

The other purported advantage was resolution independence: you could decompress the image at a higher resolution than it was compressed at, and the "fractal nature" of the encoding would interpolate detail in a more natural manner than simply enlarging the pixels.
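Because the stored data are equations rather than pixels, the same description can be rasterized at any grid size. A toy sketch of that, again using the textbook Sierpinski maps rather than a real codec: rendering onto a finer grid reveals genuinely new structure instead of enlarged blocks.

```python
import random

# Same three affine maps as the illustration above -- the entire "file".
MAPS = [
    lambda x, y: (0.5 * x,        0.5 * y),
    lambda x, y: (0.5 * x + 0.5,  0.5 * y),
    lambda x, y: (0.5 * x + 0.25, 0.5 * y + 0.5),
]

def rasterize(size, iterations=50000, seed=1):
    """Render the attractor onto a size x size grid of 0/1 cells.
    The grid resolution is chosen at decode time, not encode time."""
    rng = random.Random(seed)
    grid = [[0] * size for _ in range(size)]
    x, y = 0.0, 0.0
    for i in range(iterations):
        x, y = rng.choice(MAPS)(x, y)
        if i > 20:
            gx = min(int(x * size), size - 1)
            gy = min(int(y * size), size - 1)
            grid[gy][gx] = 1
    return grid

low = rasterize(16)   # coarse decode
high = rasterize(64)  # same stored maps, 4x the resolution
```

The 64x64 decode resolves self-similar detail that simply does not exist in the 16x16 one, which is the effect the resolution-independence claim was pointing at.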

The problem is that it never came to fruition. The main problems were that:

- Compression is extremely slow: the encoder has to search the image for pairs of regions that are self-similar under scaling, which is vastly more expensive than the block transforms used by JPEG.
- At a given file size, the decompressed quality never clearly beat the mainstream DCT-based (and later wavelet-based) codecs.
- The core techniques were heavily patented, notably by Iterated Systems, which discouraged everyone else from building on them.

Don't count it out yet, but it doesn't look like fractal image compression will be widely available for some time.