The Doom III rendering engine is the latest paradigm shift in 3D game engines, developed by the man behind pretty much every paradigm shift in 3D game engines: id Software's John Carmack.

An engine is the software core upon which modern computer games are built. Although it usually comprises many other modules – user input, sound and music, networking for multiplayer games – I will focus on the graphics rendering and try to explain, without too many technical details, why the Doom III engine is the next technological revolution in this area.

3D engines are basically smoke and mirrors: It is neither possible nor necessary to simulate a true 3D world down to the pores in the skin of the characters (à la holodeck); rather, the world just has to look real enough to sustain the player's suspension of disbelief. Traditionally, game engines use different techniques to achieve this in different areas. For example, the world in Doom I and II is rendered by traversing a binary space partitioning (BSP) tree, while objects and creatures are pre-rendered sprites (flat images).

A notoriously difficult part of real-time 3D graphics is lighting. Light and shadows are absolutely crucial to make a scene look real, yet realistic lighting equations are very complex and computationally expensive. Consequently, this is the area with the most "faking" in 3D games. Quake introduced lightmaps: A grid of points is spread across the walls, ceilings and floors of the world. Then, a radiosity algorithm calculates the light level of each point and stores it in a low-resolution texture map. This is done during development, when a level is "compiled". At runtime, the pre-calculated lightmaps are simply multiplied with the regular textures, darkening them where there are shadows. Of course, this implies that the game world must be static. Each time the designers move a wall around in a level, the lightmaps must be recalculated. And it does not work for objects, creatures and players that move through areas of varying brightness.
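The runtime half of this trick is cheap enough to sketch in a few lines. The names and data below are mine, purely for illustration, and I use a light grid the same size as the texture for simplicity (real lightmaps are much coarser than the textures they darken):

```python
# Sketch of lightmap modulation: each texel of the wall texture is
# multiplied by a precomputed light level in [0, 1], darkening shadowed areas.

def apply_lightmap(texture, lightmap):
    """Darken an RGB texture (rows of (r, g, b) tuples) by a same-sized
    grid of precomputed light levels between 0.0 and 1.0."""
    return [
        [
            tuple(int(channel * light) for channel in texel)
            for texel, light in zip(tex_row, light_row)
        ]
        for tex_row, light_row in zip(texture, lightmap)
    ]

# A 2x2 brick-colored texture whose bottom row lies in shadow:
texture  = [[(200, 100, 50), (200, 100, 50)],
            [(200, 100, 50), (200, 100, 50)]]
lightmap = [[1.00, 1.00],
            [0.25, 0.25]]   # precomputed by the radiosity pass at compile time

lit = apply_lightmap(texture, lightmap)
```

Because the multiplication happens per texel at runtime while the light levels themselves are baked at compile time, the effect is nearly free, which is exactly why the technique caught on so quickly after Quake.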

Doom III unifies the lighting for everything in the game world. Nothing is pre-calculated, and thus every light can be manipulated in real time – from swinging lamps to monsters throwing fireballs. The engine calculates shadow volumes for each light (think of cones extending from the light and wrapping around the silhouettes of nearby objects and walls) and draws them into the graphics card's stencil buffer. Then, the whole scene is drawn with the stencil buffer blocking out the areas in shadow. This process is repeated for every light before the scene finally appears on the screen. On older graphics cards that lack enough parallel texture units to combine Doom III's texturing into a single render pass, the number of passes can quickly climb to 50 or more, turning the game into a slideshow. This is why the new Doom concentrates on confined spaces and fewer, scarier enemies. Of course, as graphics cards and CPUs become faster over the next couple of years, the engine will scale beautifully, and we will see rooms full of imps again.
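The per-light loop above can be shown in a deliberately simplified form. This is a toy 1-D sketch with made-up names, not the real renderer: on the GPU, the front faces of a shadow volume increment the stencil buffer and the back faces decrement it, so that a non-zero count remains exactly on shadowed pixels. Here I just count how many shadow ranges cover each pixel:

```python
# Toy sketch of per-light stencil masking on a 1-D "screen":
# each light adds its contribution only where its stencil count is zero,
# i.e. where the pixel is not inside any of that light's shadow volumes.

def shade_scene(width, lights, shadow_spans):
    """lights maps a light name to an intensity; shadow_spans maps a light
    name to a list of (start, end) pixel ranges covered by its shadows."""
    frame = [0.0] * width
    for name, intensity in lights.items():
        # Pass 1: rasterize this light's shadow volumes into a fresh stencil.
        stencil = [0] * width
        for start, end in shadow_spans.get(name, []):
            for x in range(start, end):
                stencil[x] += 1
        # Pass 2: redraw the scene additively, but only where the stencil
        # blocks nothing (the pixel is not in this light's shadow).
        for x in range(width):
            if stencil[x] == 0:
                frame[x] += intensity
    return frame

# Two lights on an 8-pixel screen; the lamp's shadow covers pixels 2-4.
frame = shade_scene(8, {"lamp": 0.6, "fireball": 0.4}, {"lamp": [(2, 5)]})
```

Note how the whole screen is touched once per light: that is the redraw cost that piles up to dozens of passes when many lights overlap.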

Another novelty (although seen in other recent games) is the "poly-bump" technique: Every creature as well as the building blocks for the levels are constructed as very high-resolution 3D models. Then, the models are reduced to low-polygon versions for use in the game, and a bump map is created from the missing details. Since lighting in the game is calculated at the level of individual pixels, this texture can then be used to distort light directions so that it appears as if the detail is still there. Naturally, this results in a lot of work for the artists – but on the other hand, it gets rid of the time-consuming texture painting and skinning step in the art pipeline, where traditionally, details are drawn into texture maps that are wrapped around the 3D models.
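The "distorted light direction" part boils down to evaluating the diffuse lighting term with a normal read from the bump map instead of the flat geometric normal. A minimal sketch, with vectors invented for illustration:

```python
import math

def normalize(v):
    """Scale a vector to unit length."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def lambert(normal, light_dir):
    """Clamped Lambertian term N.L: the per-pixel diffuse light level."""
    return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))

# The low-polygon surface is flat, so its geometric normal points straight up...
flat_normal = (0.0, 0.0, 1.0)
# ...but the bump map stores, for this pixel, the normal captured from the
# high-resolution model (these values are made up for the example).
bumped_normal = normalize((0.3, 0.0, 1.0))

light_dir = normalize((1.0, 0.0, 1.0))

flat_shade   = lambert(flat_normal, light_dir)    # lighting without the detail
bumped_shade = lambert(bumped_normal, light_dir)  # detail recovered per pixel
```

Because the perturbed normal leans toward the light here, the pixel comes out brighter than the flat surface would, and it is exactly this per-pixel variation that makes a few hundred polygons look like a few hundred thousand.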

The genius of John Carmack lies in his foresight: he realized two or three years ago, when he began work on the engine, that this approach would be a perfect fit for today's graphics accelerators while scaling nicely for at least three more years. The next generation of graphics cards brings floating-point precision everywhere and almost unlimited possibilities for programming at the pixel level, so he will have to write at least one more engine before he can retire and devote his time entirely to building rocket ships.

Sources, apart from my own experience as a game developer: various interviews with Carmack and technical articles.
For further reading, a good start is NVIDIA's developer center.
