When generating computer-generated imagery, there is something to be very wary of: the way the eye works. The human eye is built to detect edges, which means people can detect very slight differences in colour if there is an edge between the two colours. In fact, the eye emphasises sudden changes in colour. Let's illustrate with a graphic. Say we have an image with alternating bands of slightly dark grey and slightly light grey. The intensity graph might look like this as we move across the bands:

Intensity ^
(higher   |
is        |
lighter   |
grey)     |  
          |-----+      +------+
          |     +------+      +-----
                              moving across. 

The eye, because of lateral inhibition and also because of psychological effects, picks up the sharp transition, so what it actually sees is more like this:

Intensity ^
(higher   |
is        |
lighter   |    /|        |\      /|
grey      |---/ |        | \----/ |
          |     | /----\ |        | /------
          |     |/      \|        |/
                                   moving across
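The overshoot and undershoot at each edge can be reproduced with a tiny simulation. The following is a minimal sketch of lateral inhibition in one dimension: each receptor's output is its input minus a fraction of its neighbours' inputs. The inhibition weight is illustrative, not physiological.

```python
def lateral_inhibition(signal, inhibition=0.4):
    """Crude centre-surround model: each receptor subtracts a
    fraction of its neighbours' inputs from its own (boosted) input.
    The `inhibition` weight is an illustrative assumption."""
    out = []
    for i, x in enumerate(signal):
        # Clamp at the ends by reusing the receptor's own value.
        left = signal[i - 1] if i > 0 else x
        right = signal[i + 1] if i < len(signal) - 1 else x
        out.append(x * (1 + 2 * inhibition) - inhibition * (left + right))
    return out

# A step edge between two grey bands, as in the graphs above:
bands = [0.4] * 5 + [0.6] * 5
response = lateral_inhibition(bands)
# Flat regions pass through unchanged, but the response dips just
# before the edge and spikes just after it -- the Mach band effect.
```

Far from the edge the inhibition cancels out exactly, so uniform regions are unchanged; only the transition gets the characteristic dip-and-spike shape drawn above.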
The eye emphasises the edges. This can lead to really horrible artefacts when rendering digital images. The solution is tricky, but usually involves jittering or dithering, or some mix of the two. This is why Hollywood movies are actually rendered at 96 bits and dithered down to 24 bits. At first it seems to make no sense why anyone would use 2^96 colours when the human eye can only see about 4 million, until you realise this.
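Here is a minimal sketch of the dither-before-quantise idea (not any studio's actual pipeline; the level count and noise amplitude are illustrative assumptions). Adding a little noise before rounding turns the quantisation error into fine-grained noise instead of the long flat runs that the eye would read as banded edges.

```python
import random

def quantize(value, levels):
    """Round a 0..1 value to the nearest of `levels` evenly spaced steps."""
    step = levels - 1
    return round(value * step) / step

def dither_quantize(value, levels, rng=random):
    """Jitter by up to half a quantisation step before rounding, so the
    error becomes noise rather than visible bands."""
    step = levels - 1
    noise = (rng.random() - 0.5) / step
    return quantize(min(max(value + noise, 0.0), 1.0), levels)

# A smooth gradient quantised to 4 levels: plain rounding produces long
# runs of identical values (hard banded edges the eye will emphasise);
# dithering breaks those runs up while keeping the same average level.
gradient = [i / 99 for i in range(100)]
banded = [quantize(v, 4) for v in gradient]
dithered = [dither_quantize(v, 4) for v in gradient]
```

The dithered output uses the same 4 levels, but neighbouring samples flicker between adjacent levels, so there is no single sharp transition for lateral inhibition to latch onto.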