A combination of image compression and document storage algorithms bundled into a standardized file format.

DjVu was originally developed at AT&T Labs, which released open-source versions of encoders, decoders, and viewers. The format was subsequently acquired by LizardTech, which made improvements and updates to the system, then closed the source entirely and all but killed DjVu by trying to force it into the niche market the company believed it belonged in.

The system itself is a collection of image compressors, a packaging system for manipulating individual compressed images and collections of images, and viewers and extractors. Netscape and Internet Explorer plugins were released (and are still available). When LizardTech acquired and closed DjVu, the open-source versions of the tools were "adopted" and dubbed DjVuLibre, and that project continues to issue new releases (mostly bug fixes, these days).

Compression Advantages
DjVu compresses different kinds of images differently; the most appropriate algorithm for a given image can be chosen automatically by the compressor or manually by the user.

The compression is not lossless, but is done so efficiently that almost no perceptible quality loss occurs. Part of this efficiency is in how the compressor separates the background from the foreground. When possible, the library separates the "background" of an image (the paper grain, for example) from the text. When isolated, the background layer can be compressed via wavelet compression (see below), while the foreground layer is compressed with an algorithm better suited to black-and-white images.

Black-and-white images generally compress to between a fifth and an eighth the size of a compressed TIFF file -- a 300 DPI scan of a full-page document can drop to 30KB when compressed by DjVu without visible quality loss. The DjvuJB2 encoder handles these. Foreground/background separation occurs when plausible, though it isn't always effective; the DjvuDocument encoder performs it for color line-art documents. Both encoders offer lossy and lossless encoding.

Color photographs are compressed using wavelet compression (similar to that used in the JPEG2000 specification), resulting in data sizes 5 to 10 times smaller than a JPEG encoding of the same image data at similar quality settings. The DjvuPhoto encoder handles these.

The compression engines offer different quality settings to optimize space and/or viewing on a desired platform (see below).
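As a rough sketch of picking an encoder by hand with the DjVuLibre command-line tools (the file names here are hypothetical, and the tools must be installed):

```shell
# Bitonal scan (e.g. a 300 DPI black-and-white page): the JB2 encoder.
# -lossy enables the more aggressive (but near-identical-looking) mode;
# drop it for fully lossless output.
cjb2 -dpi 300 -lossy page.pbm page.djvu

# Color photograph or textured background layer: the IW44 wavelet encoder.
c44 photo.ppm photo.djvu
```

Mixed documents (text over a textured background) go through the segmenting encoders instead, which produce the layered foreground/background pages described above.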

Storage and Transport Advantages
Anyone familiar with tar, ZIP, ARJ, ARC, or RAR formatted archives understands the benefits of packaging many files into a single archive -- no filesystem slack is created by having many files lying around, overhead is reduced (the data can be indexed, making seeks faster), and additional metadata can be provided for each entry.

DjVu supports the notion of "pages" inside each DjVu file. A DjVu file can contain one image (or page), or many hundreds of images (pages). The library can automatically create and attach thumbnails to each page as well (similar to a Portable Document Format archive), as well as arbitrary tags (user-defined).

A single file can be used to serve up documents via a web page through the use of a CGI script included with DjVuLibre (the browser doesn't need to download the entire archive; the CGI script calls external supporting tools to create a DjVu archive containing just a desired page (or range of pages) and sends that instead).
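The same trick can be sketched with the stock djvused tool: its select and save-page commands pull a single page out of a bundled archive, which a script could then send to the browser instead of the whole file (names hypothetical):

```shell
# Extract page 5 of a bundled document into its own single-page DjVu file.
djvused book.djvu -e 'select 5; save-page page5.djvu'
```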

Utilities exist to manipulate archives -- create a new DjVu file to collect multiple DjVu files (each containing one or more images) into it, add new images to an existing DjVu file, remove or reorder them, change their attributes and tags, and update their thumbnails.
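A sketch of those manipulations using djvm, the bundling utility shipped with DjVuLibre (file names hypothetical):

```shell
djvm -c album.djvu img1.djvu img2.djvu img3.djvu  # create a bundled archive
djvm -i album.djvu img4.djvu                      # append another page
djvm -d album.djvu 2                              # delete page 2
djvm -l album.djvu                                # list the pages
```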

Viewing Advantages
DjVu provides an efficient multiple-pass decoder, much like progressive JPEG encoding, with an emphasis on low memory usage even for huge images. A gigantic image compressed to 25MB by a DjVu compressor (trust me; if the compressed data is this size, you're talking about an absolutely huge image) will still only require 2MB of memory to decode and display. The viewer is fast and efficient, providing smooth panning and scaling even on modestly-equipped computer systems.

The levels of progressive encoding (and the quality of each level) can be specified at compression time; common settings create three encoding levels, each of progressively higher quality than the last, up to the third level, which reproduces the original image. The viewer shows the first level first, letting users quickly view a lower-quality version of the image so they can find the one they're looking for without waiting for all the decoding passes to finish.
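With the c44 encoder, those levels correspond to its -slice option: each number adds more wavelet slices on top of the previous chunk, so the decoder can render after the first chunk arrives and then sharpen as the rest stream in. A sketch (the slice counts shown are c44's documented defaults):

```shell
# Three progressive refinement chunks: 74 slices, then 13 more, then 10 more.
c44 -slice 74+13+10 photo.ppm photo.djvu
```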

Commercial Features
Among LizardTech's proprietary extensions to this system are automated optical character recognition to encode the text body along with the image of the scan with each page of a scanned document, a viewer with knowledge of this feature providing searching capabilities, and more optimized compressors (better image quality, smaller file sizes, faster compression times).

Political Crap
The original authors of the DjVu library were initially pleased about LizardTech's acquisition of the library, but hopes were soon dashed as the company closed up the source entirely and brought forth new, closed-source, proprietary extensions and products. The company also seemed intent on making DjVu a niche product (document management, in the "scan and archive paperwork" sense). While it fits this need perfectly, it can do far more than this.

Recently LizardTech was acquired by a company named Celartem, who quickly replaced the leadership in place with a person apparently more "compatible" with the original creators' goals for the project.

Weaknesses
These are hard to find, to be honest, but they're present, nonetheless:

  • Dedicated viewer required. - You have to use a DjVu-aware viewer (as with any other file format, your viewer needs to understand the image transport and compression methods). Because DjVu is both a compressor and an archiver, viewing is further complicated: the viewer must not only decompress individual images, but also gracefully handle navigation between pages within a single archive file (including pre-generated thumbnails and user tags). DjVuLibre includes a standalone viewer (based on the Qt toolkit, for better or worse) and a plugin for Netscape and Internet Explorer. The Netscape plugin does function in Mozilla, and probably works with Mozilla Firebird and Galeon as well. The plugin and standalone viewers are adequate, but not impressive; the library itself provides high-quality zooming and panning, so that's about all you get. They do support thumbnails and multi-page documents, however.

  • No bindings for languages. - At present, you only get to play directly with the libraries if you're coding in C. If you want Perl, Python, or even PHP bindings, you're currently s.o.l. unless you want to write them yourself.
    Clearly this is the kind of thing the original authors are most interested in -- imagine authoring yet another web-based image gallery that uses DjVu to compress and store images in a compact format but can present JPEG-encoded images for viewing in normal browsers on request, or that could package up and distribute selected images on the fly, all from the author's preferred web scripting language (I am aware that Perl, Python, and PHP are capable of more than web scripting). Without native bindings to the library, such a system is forced to rely on making a system call to run the standalone command-line utilities, with all the permissions, logistics, and performance problems that entails.

  • No native browser support. - A plugin is a great first step, but DjVu could benefit greatly from built-in browser support for the format. If you ask your users to install a plugin, they get miffed these days. If their browser "just works," those problems go away.
  • No other viewers available. - Amazingly, only the viewer and plugins that come with DjVuLibre are available for actually *viewing* DjVu files. You can extract DjVu images back to TIFF and JPEG formats, but that sort of ruins the whole point of the exercise. No other applications (to my knowledge) have ever picked up the library and supported it. Personally, my life would be perfect if gqview would add support for DjVu files (including multi-page ones).
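For completeness, extraction back to a common raster format is a one-liner with the ddjvu decoder (names hypothetical; TIFF output requires a libtiff-enabled build):

```shell
# Render page 1 of a DjVu file to TIFF for tools that don't speak DjVu.
ddjvu -format=tiff -page=1 doc.djvu doc.tiff
```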
Personal Experience
    My 3.2 megapixel digital camera takes very large pictures. Images routinely exceed 500KB with JPEG compression. That's fine because I have a 1GB IBM Microdrive in the camera that makes life happy.

    A sample image from the camera:

    $ identify IMG_6203.JPG 
    IMG_6203.JPG JPEG 2048x1536+0+0 DirectClass 8-bit 352.7kb 0.000u 0:01
    
    And a simple before-and-after comparison of file sizes after compressing with standard compression settings:
    $ c44 IMG_6203.JPG img.djvu
    $ ls -l
    total 428
    -rw-r--r--    1 willfe   willfe       361190 Aug 20 16:42 IMG_6203.JPG
    -rw-r--r--    1 willfe   willfe        71442 Dec 20 05:47 img.djvu
    
    With the djview viewer, image quality was not visibly different from the original JPEG image; both images were high quality and visible artifacting was minimal and hard to spot.

    In real life use, I have found compressing from JPEG to DjVu reduces file sizes on average to 20% or less of their original size. Piling all the DjVu images together into a single file (archiving every picture taken in a day into a single file, for instance) tends to bring another 4-5% decrease in file size as a direct result of reduced overhead, even with thumbnails.

    I use DjVu myself to manage my monstrous collection of photographs taken by our digital camera. I've even written a Perl script to convert a directory of images, compress them to DjVu, add them all to an archive, generate thumbnails, and remove the originals (and then report the savings):

    #!/usr/bin/perl
    #
    # Compress a batch of JPEG images into a single DjVu archive.
    # Usage: mkdjvu.pl archive.djvu image1.jpg image2.jpg ...
    
    use strict;
    use warnings;
    use File::Basename;
    
    my $archname = shift @ARGV or die "usage: $0 archive.djvu images...\n";
    $| = 1;    # unbuffer output so per-file progress appears immediately
    
    my (@files, @dels);
    my $totalsize = 0;
    
    for my $img (@ARGV) {
        my ($name, $path, $suffix) = fileparse($img, qr{\..*});
        my $size = (stat $img)[7];
        $totalsize += $size;
        print "$name$suffix: $size b ";
        system('c44', $img, "$name.djvu") == 0
            or die "c44 failed on $img\n";
        my $newsize = (stat "$name.djvu")[7];
        my $saved = 100 - (($newsize / $size) * 100);
        push @files, "$name.djvu";
        push @dels, $img;
        printf " -> %d b (saved %d%%)\n", $newsize, $saved;
    }
    
    print "$archname: ";
    system('djvm', '-c', $archname, @files);    # bundle pages into one archive
    # use $ENV{HOME}: the shell isn't involved here, so '~' is not expanded
    system('djvused', '-f', "$ENV{HOME}/conf/djvu", $archname);  # thumbnails
    print "done\n";
    
    my $archsize = (stat $archname)[7];
    print "Original images = $totalsize bytes\n";
    print "New archive size = $archsize bytes\n";
    printf "  (saved %d%%)\n", 100 - (($archsize / $totalsize) * 100);
    print "Removing original image files: ";
    unlink @dels, @files;    # drop the originals and the per-page DjVu files
    print "done\n";
    The contents of ~/conf/djvu:
    remove-thumbnails
    set-thumbnails 96
    save
    
    (these are just commands to djvused, which manipulates DjVu archives).

    Pass it first the name of the archive to create, then a list of JPEG images to compress into it. Archiving is performed last; you must have enough disk space to hold both the JPEG images and the DjVu images (twice over) for this to work.

    Running this on a directory of 92 JPEG photographs occupying 66MB of disk space results in a single file in the directory, containing images indistinguishable from the originals, taking up only 9MB of disk space for a savings of 87%. The run took nearly ten minutes on a dual P3 1GHz, though, so the compression process is not speedy. Decompression is much faster, though, no slower than JPEG decoding (and probably faster for larger images since the decoder doesn't explode the whole image into memory unless it's asked to).

    The low compression speed makes this not quite suitable for real-time applications, but it's surely acceptable for a photo gallery application. Decoding is fast enough for real-time applications though.

    In all, it's a pretty slick set of tools and an amazing compression algorithm. I look forward to scanning all my paperwork into this format as well; combined with a DVD-R drive, I estimate being able to store at least 50,000 photographs or 200,000 scanned black & white pages (or scanned color line-art documents) on a single 4.7GB piece of media. How's that for technology actually being helpful for a change? :)

    More information about the library (and source code for it) can be found at http://djvu.sf.net/.
