A ray tracing program produces a final image with a strategy along these lines:
• Read in the description of the scene to be rendered, and create the corresponding data structures in memory. Quite often this input will be in the form of a text file, written by the artist or generated by a modeller, a program providing a convenient graphical interface for such geometric modeling. You can see an example of a POV-Ray input file in the dodecahedron node.

These input files will usually combine geometric 'primitives' (spheres, boxes and planes, or sometimes more complicated objects and surfaces, depending on the feature set of the ray tracer) into more complex shapes using a kind of algebra for combining 3D shapes, known as CSG, or Constructive Solid Geometry. It borrows the familiar operations of union and intersection from set theory, along with the slightly less well-known one of difference.
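That algebra is simpler than it sounds: if each solid can answer "is this point inside you?", the CSG operations are just boolean logic on those answers. Here is a minimal Python sketch along those lines; all the class names are made up for illustration and real tracers work on ray intersections rather than point tests, but the set-theoretic idea is the same:

```python
# CSG as boolean logic on "inside" tests. Illustrative only; class
# names and representation are invented for this sketch.

class Sphere:
    def __init__(self, center, radius):
        self.center, self.radius = center, radius
    def inside(self, p):
        # Point is inside if its squared distance to the center
        # is at most the squared radius.
        return sum((a - b) ** 2 for a, b in zip(p, self.center)) <= self.radius ** 2

class Union:
    def __init__(self, a, b): self.a, self.b = a, b
    def inside(self, p): return self.a.inside(p) or self.b.inside(p)

class Intersection:
    def __init__(self, a, b): self.a, self.b = a, b
    def inside(self, p): return self.a.inside(p) and self.b.inside(p)

class Difference:
    def __init__(self, a, b): self.a, self.b = a, b
    def inside(self, p): return self.a.inside(p) and not self.b.inside(p)

# A ball with a smaller ball carved out of its right side:
shape = Difference(Sphere((0, 0, 0), 1.0), Sphere((0.5, 0, 0), 0.5))
```

A point on the left of the big sphere is still inside the difference; a point in the carved-out region is not.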

• The 'camera' position - the coordinates that the final image is seen from - is given in the file, as are the orientation and field of view. From these, it is possible to work out the position of the 'screen' - the resulting image - which will be a plane perpendicular to the orientation of the camera, at a size and distance indicated by the field of view (many correct combinations of size and distance will do.)
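One way to pin down such a combination is to place the screen at distance 1 in front of the camera, at which point its half-width falls straight out of the tangent of half the field of view. A hedged sketch, with a hypothetical helper name and an 'up hint' vector that real scene files usually also supply:

```python
import math

def cross(a, b):
    # Standard 3D cross product.
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def screen_basis(forward, up_hint, fov_degrees, aspect):
    """Hypothetical helper: returns (forward, right, up) where 'right'
    and 'up' span the virtual screen placed one unit ahead of the camera.
    'aspect' is the width/height ratio of the image."""
    forward = normalize(forward)
    right = normalize(cross(forward, up_hint))
    up = cross(right, forward)
    # At distance 1, half the screen width is tan(fov / 2).
    half_w = math.tan(math.radians(fov_degrees) / 2)
    half_h = half_w / aspect
    return (forward,
            tuple(half_w * r for r in right),
            tuple(half_h * u for u in up))
```

A pixel's ray direction is then the forward vector plus the right and up vectors scaled by the pixel's position in [-1, 1] screen coordinates.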

• The ray tracer will then determine the correct coloration for each pixel in the image in turn, by tracing a 'ray' (simply a line) from the camera position through the corresponding pixel in the virtual 'screen'. 3D coordinate geometry is used to determine which scene objects (if any) are hit by the ray and which of these is closest to the camera. For example, in the case of a sphere object, it is simply a matter of determining whether the closest approach of the ray to the center of the sphere is less than the radius of the sphere, and then calculating the point of intersection with the sphere if it is.
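The sphere test described above might look like this in Python; the function name is made up, and a production tracer would use vectorised maths, but the geometry is the same:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def ray_sphere(origin, direction, center, radius):
    """Sketch of the ray/sphere test: find the ray's closest approach to
    the sphere's center; if that is within the radius, step back along
    the ray by half the chord to get the near intersection.
    'direction' must be unit length. Returns the distance along the ray
    to the nearest hit in front of the origin, or None for a miss."""
    oc = tuple(c - o for o, c in zip(origin, center))
    t_closest = dot(oc, direction)             # distance to closest approach
    d2 = dot(oc, oc) - t_closest * t_closest   # squared distance of closest approach
    if d2 > radius * radius:
        return None                            # closest approach exceeds the radius
    half_chord = math.sqrt(radius * radius - d2)
    t_hit = t_closest - half_chord             # near intersection point
    return t_hit if t_hit > 0 else None
```

Comparing the returned distances over all objects hit gives the one closest to the camera.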

The properties of the surface that has been 'hit' are then consulted. Depending on these, various things may happen. If the surface is reflective, a normal is calculated, and the ray is shot off at the reflected angle to see "where it came from". The surface may be transparent, in which case the ray is instead continued (possibly at an angle depending on the index of refraction of the object concerned) through the object, to see where that came from. In both these cases, the ray can be treated recursively. Otherwise, in the base case, a colour value is calculated depending on various properties that may be assigned to the surface and how exposed it is to the light sources defined in the input file.

In fact many or all of these cases may happen simultaneously for any given pixel: a ray traced through it may hit an object that's reflective, translucent and coloured all at once. The final results are combined and the corresponding pixel of the output image is coloured accordingly.
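The recursion and the final combination can be sketched together in Python. Everything here is a made-up stand-in: colours are reduced to a single brightness number, the 'scene' is a toy with one surface, and the field names are invented. Real tracers also cap the recursion depth, as below, so a hall of mirrors doesn't recurse forever:

```python
MAX_DEPTH = 5  # recursion cap for reflective/transparent bounces

def trace(scene, ray, depth=0):
    """Sketch of the recursive step: base-case shading plus weighted
    reflected and transmitted contributions."""
    hit = scene.find_hit(ray)
    if hit is None or depth >= MAX_DEPTH:
        return scene.background
    surface = hit["surface"]
    # Base case: colour from surface properties and light exposure.
    colour = surface["diffuse"] * hit["light_exposure"]
    # Reflective contribution: follow the bounced ray recursively.
    if surface["reflect"] > 0:
        colour += surface["reflect"] * trace(scene, hit["reflected_ray"], depth + 1)
    # Transparent contribution: continue through the object.
    if surface["transmit"] > 0:
        colour += surface["transmit"] * trace(scene, hit["refracted_ray"], depth + 1)
    return colour

class ToyScene:
    """One partly reflective surface, hit only by ray 'a'; the
    reflected ray sails off into the background."""
    background = 0.1
    def find_hit(self, ray):
        if ray == "a":
            return {"surface": {"diffuse": 0.5, "reflect": 0.4, "transmit": 0.0},
                    "light_exposure": 1.0,
                    "reflected_ray": "miss",
                    "refracted_ray": "miss"}
        return None
```

Here the pixel for ray 'a' gets 0.5 of local shading plus 0.4 of whatever the reflected ray sees (the 0.1 background), for a combined 0.54.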

Obviously, 3D coordinate geometry, usually in the form of vectors and matrices, is extremely useful in these calculations.

A surprisingly large number of operations is needed for even quite simple scenes, and when many primitives and advanced features are used, massive rendering times can ensue. I've seen images posted on the POV-Ray newsgroups that took literally weeks to render.

Though it produces very striking images, a pure ray tracing model is insufficient for photorealism (generally speaking--with the right scene and the right artist it can get pretty close.) That's because of effects like radiosity, the way light diffuses by bouncing from object to object, which makes shadows appear soft and less-than-absolute in real life. A ray tracing of a flashlight beam hitting a mirror will not show a reflected spot in the correct place, because there is no way for the tracer to know that a particular surface was illuminated via the mirror. Caustics, the curved patterns that arise when light shines onto a surface through a curved refractive material, are missing for similar reasons. In pursuit of realism, radiosity and photon mapping have been added "on top" of the pure ray tracing model in many actual programs.

Even with such add-ons, the images produced often tend to look 'hyperreal' rather than realistic (though it must be noted that the flexibility of ray tracing software makes any generalisation about the type of images produced a bit suspect - many different styles are possible, and not all artists aim at strict realism, though this is a notional design goal of the software.)

You can see some great (and some not-so-great) ray traced images at the website of the IRTC - the Internet Ray Tracing Competition - at www.irtc.org.

Ray tracing as a hobby can consume some serious user-cycles, as well as CPU cycles, and its adoption, like nethack, should be treated with some caution.