# Coherence in Computer Graphics

Look around an electronics store and you will find dozens of extremely powerful 3D graphics cards that let your computer play games approaching photorealism at frame rates well above those of motion pictures. For less than $100 your computer can produce real-time animation that would have taken weeks to render just two decades ago. Do the math and you will find that graphics processing power has grown well ahead of the curve for general-purpose processors. How is this possible?

The answer is simply **coherence**, a word bandied about by computer graphics educators everywhere. Essentially, almost anything you would like to render with a computer is made up of mostly-continuous data. Consider that with 256 possible colors, an 800x600 pixel screen can display 256^{480,000} possible pictures. The vast majority of those would simply look like noise, but the ones that look interesting all have one thing in common: continuous areas of related color. The result is that calculating the value of a pixel in a computer-generated image can often be simplified by using the value of a previously computed pixel.
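To get a feel for how large that count is, here is a quick sanity check (assuming, as above, 8-bit color on an 800x600 raster; the variable names are just for illustration):

```python
import math

# Number of pixels on an 800x600 screen.
pixels = 800 * 600  # 480,000

# 256**480,000 is far too large to print, but we can count its decimal
# digits: floor(log10(256**pixels)) + 1.
digits = int(pixels * math.log10(256)) + 1

# The number of possible screens has over a million digits -- almost all
# of them pure noise.
```
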

Modern consumer hardware assumes that the most computationally expensive graphics to render are those based on 3D models and lighting, and it exploits the coherence inherent in those models with hardware purpose-built to solve a handful of optimized equations. Generally these ideas are tested in software and then moved into hardware for the next generation of graphics cards.

The advanced algorithms used today are so heavily optimized that they require months or years of study to understand fully. Simpler algorithms from decades ago illustrate the principle of coherence much better.

### Examples:

#### Bresenham's Algorithm

Bresenham's Algorithm draws a straight line on a 2D raster. This is one of the most basic algorithms in computer graphics, yet also one of the most optimized. The naive method of drawing a line would be to evaluate the line's parametric representation with floating-point arithmetic and compute the location of each pixel one by one.

```
y ^
  |         OOO
  |       OOO
  |     OOO
  |   OOO
  | OOO
  +--------------------->
                        x
```

Bresenham's Algorithm first picks the x or y axis as the driving axis so that the slope is always between -1 and 1, then takes advantage of the fact that each pixel's x_{n} will be equal to x_{n-1}+1, and that the y value will either step by one pixel or stay the same, depending on the slope.

So this algorithm takes advantage of the fact that each pixel in a line is very closely related to (i.e., coherent with) the previous pixel. The result is an order-of-magnitude increase in performance without any special hardware.
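The inner loop can be sketched as follows (a minimal integer-only version, assuming x is the driving axis and |slope| ≤ 1; the function name and the pixel-list return value are illustrative, since a real implementation would plot directly to a framebuffer):

```python
def bresenham_line(x0, y0, x1, y1):
    """Return the raster pixels of the line from (x0, y0) to (x1, y1).

    Assumes x0 < x1 and |y1 - y0| <= x1 - x0 (the axis-picking step
    described above has already been done).
    """
    pixels = []
    dx = x1 - x0
    dy = y1 - y0
    y_step = 1 if dy >= 0 else -1
    dy = abs(dy)
    # Decision variable: tracks the accumulated error between the true
    # line and the pixel centers, using only integer adds and compares.
    error = 2 * dy - dx
    y = y0
    for x in range(x0, x1 + 1):
        pixels.append((x, y))
        if error > 0:
            y += y_step          # step in y only when the error crosses the midpoint
            error -= 2 * dx
        error += 2 * dy
    return pixels
```

Note that each pixel is derived from the previous one with a couple of integer additions and a comparison: the coherence of the line removes all multiplication, division, and floating-point math from the loop.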

#### Z-Buffer Algorithm

The Z-Buffer Algorithm is a method of hidden-surface removal for 3D scenes. It has the advantage of working with polygons regardless of the order in which they are passed in, and of handling intersecting polygons gracefully without generating holes or artifacts. On the downside, it takes a lot of memory, and it must run as the final step in the rendering process, after a lot of unnecessary work may already have been done.

The Z-Buffer Algorithm works with 3D coordinates, the x and y being screen coordinates and the z being depth. For each pixel on the screen the algorithm tracks the last-drawn depth, and for each pixel of a polygon to be rendered it checks whether that pixel is shallower than the last-drawn depth at that location. If so, the pixel is drawn; otherwise it is discarded.
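The per-pixel test can be sketched like this (a simplified version assuming the polygons have already been rasterized into lists of `(x, y, z)` samples; the function name and input format are made up for illustration):

```python
import math

def render(width, height, polygons):
    """polygons: a list of polygons, each a list of (x, y, z) pixel samples.

    Returns (frame, zbuffer), where frame holds the index of the visible
    polygon at each pixel (or None for background).
    """
    frame = [[None] * width for _ in range(height)]
    # Initialize every depth to "infinitely far away".
    zbuffer = [[math.inf] * width for _ in range(height)]
    for poly_id, pixel_samples in enumerate(polygons):
        for x, y, z in pixel_samples:
            # Draw only if this sample is shallower than what is already there.
            if z < zbuffer[y][x]:
                zbuffer[y][x] = z
                frame[y][x] = poly_id
    return frame, zbuffer
```

Because every sample is tested against the stored depth, the visible result is the same no matter what order the polygons arrive in.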

If only triangles are passed to the algorithm, it turns out there is an extremely efficient way to calculate subsequent pixel depths, along the lines of Bresenham's algorithm: because a triangle is planar, the depth changes by a constant amount from one pixel to the next.
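A sketch of that incremental idea (the function name is hypothetical): a triangle lies in a plane `a*x + b*y + c*z + d = 0`, so along a scanline (constant y) the depth changes by the constant `-a/c` per pixel step in x. Coherence replaces a full plane evaluation per pixel with a single addition.

```python
def scanline_depths(a, b, c, d, y, x_start, x_end):
    """Depths of pixels (x_start..x_end, y) on the plane ax + by + cz + d = 0.

    Assumes c != 0, i.e. the triangle is not edge-on to the viewer.
    """
    # Full plane evaluation once, at the left edge of the scanline.
    z = -(a * x_start + b * y + d) / c
    # Constant per-pixel change in depth along the scanline.
    dz = -a / c
    depths = []
    for _ in range(x_start, x_end + 1):
        depths.append(z)
        z += dz  # one addition per pixel thereafter
    return depths
```

For example, on the plane `x + z = 10` (so `a=1, b=0, c=1, d=-10`), the depths across a scanline simply count down by one per pixel.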