In most modern games, the 3D graphics look impressive, at least for those with monster systems. But at best these systems produce lighting effects which are an extremely crude approximation to how real lighting works (this is true even for the brand-new, jizz-worthy Doom 3). And whilst the average hardcore gamer won't really care whether their rockets have a slightly brighter glow or not, there are those in this world, mostly some sort of geek, who really care whether their rendered image matches up pixel for pixel with a photo. In fact, someone somewhere cares so much that they feel this is a worthy area of research.

And so we have the Cornell Graphics Department, the definitive source on all things truly graphic. See

The basic premise is that the system must actually simulate the physics of light, including reflection, refraction, etc. For example, in most modern renderers the user can set an ambient light term, which fills the scene with uniform illumination. In reality, this "ambient" light is the result of diffuse inter-reflections, which distribute the light in an (exceedingly roughly) uniform manner. The difference is noticeable at corners where two surfaces meet: in reality there should be less illumination there, but the ambient light method misses this entirely. Whilst this may seem a very minor problem, the effect is actually very objectionable. Even though a scene rendered by a "photorealistic" renderer hides the problem under a great deal of specular reflection, the human viewer can still discern something wrong with the lighting and perceive it to be false.
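The ambient fudge can be sketched as follows (a minimal illustration, not any particular renderer's code; the function names and the 0.1 constant are made-up assumptions):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade(normal, light_dir, albedo, light_intensity, ambient=0.1):
    """Lambertian diffuse term plus a constant 'ambient' fudge factor.

    The ambient term adds the same illumination everywhere, so a point
    deep in a corner receives exactly as much "bounced" light as a
    point on an open wall -- which is physically wrong.
    """
    # Diffuse contribution: proportional to the cosine of the angle
    # between the surface normal and the direction to the light.
    cos_theta = max(0.0, dot(normal, light_dir))
    diffuse = albedo * light_intensity * cos_theta
    # Ambient contribution: a flat constant standing in for all the
    # inter-reflected light this local model cannot compute.
    return diffuse + albedo * ambient

# A surface facing the light, and one facing directly away: the second
# gets only the ambient constant, no matter what geometry surrounds it.
print(shade((0, 0, 1), (0, 0, 1), 1.0, 1.0))   # 1.1
print(shade((0, 0, 1), (0, 0, -1), 1.0, 1.0))  # 0.1
```

A physically based renderer replaces that constant with an actual computation of the light bouncing between surfaces, which is exactly what makes corners darken naturally.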

So then, the only way left is to do a physical simulation of light. Broadly speaking, there are two approaches to this problem: radiosity and Monte Carlo methods. The radiosity write-up is pretty good already, so I'll just cover the latter.

The Monte Carlo approach originates from research done at Los Alamos during WWII. The researchers were studying the penetration of neutrons into various materials by simulating individual neutron trajectories. These ideas were soon adapted elsewhere, including to simulating the trajectories of photons, and so naturally to rendering. At its most basic, individual photons are bounced around the scene; the best implementation of this is probably WinOSi (available at ). Strangely enough, this method is known as photon tracing. However, this approach is clearly extremely slow, since many photons emitted from the light sources will never actually contribute to the overall image. A second method is to start from the eye and work towards the light sources; this is clearly a generalisation of ray tracing. Although it may seem much better at first thought, it is in fact mathematically provable that it results in just the same amount of work. Most modern research uses systems which start from both ends and work towards the middle, so-called bidirectional ray tracing. However, the exact mathematical justification that this results in a perfectly physically accurate image is formidable (see
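The wastefulness of naive forward photon tracing can be caricatured with a toy one-dimensional random walk (everything here is a made-up illustration, not WinOSi's algorithm or any real renderer): photons leave the light, scatter randomly, and only the few that happen to land on a small "camera" patch contribute anything.

```python
import random

def trace_photons(n_photons, camera_lo=0.48, camera_hi=0.52, max_bounces=8):
    """Fire photons from a light at x=0 and random-walk them in [0, 1].

    Returns the fraction whose final position lands on the camera patch.
    Even in this generous 1D toy only a few percent count; in a real 3D
    scene the fraction reaching the image is far smaller still.
    """
    hits = 0
    for _ in range(n_photons):
        x = 0.0  # photon starts at the light source
        for _ in range(max_bounces):
            x = random.random()        # scatter to a random position
            if random.random() < 0.5:  # 50% chance of absorption
                break
        if camera_lo <= x <= camera_hi:
            hits += 1
    return hits / n_photons

random.seed(1)
frac = trace_photons(100_000)
print(frac)  # roughly 0.04: ~96% of the simulated photons were wasted
```

Starting paths from the eye instead guarantees every path is relevant to some pixel, but (as noted above) it merely moves the inefficiency elsewhere: now paths struggle to find the light sources.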

As the images at Cornell show, those geeks have finally got their renderers to match a photo pixel for pixel.