Sounds complicated, but it really isn't.

The plenoptic ('pleno'='total' & 'optic'='light') function is a very trendy function in computer graphics research these days. So why are all of those graphics geeks so excited by this function? Well, you saw the movie, The Matrix, right? Remember the scene in the beginning when Trinity jumps up in the air and the camera does a 360° lookaround? Ever try out Quicktime VR? Ever wonder if we'll ever have 3-dimensional television? Well, the plenoptic function is fundamental to all those things.

Here's the one-line definition: the plenoptic function is geometrical optics' view of the flow of light in the world. Don't get it? Read on.

Remember that in geometrical optics, light is assumed to travel in straight lines in vacuum and in air¹. Rays of light don't bend or disappear unless they hit something.

Light is emitted - for example from a point on the screen of your computer monitor - and then goes straight out from that point in every direction. When one of those light rays hits something - like a mirror, a beer bottle, your hand, whatever - it gets refracted, reflected, scattered, or absorbed. If it's reflected, scattered or refracted, it goes straight in another direction until it hits something else. And then it gets reflected or refracted again, ad infinitum. (By the way, making pictures by simulating this process with a computer is called ray tracing.)
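
If you want to see what one bounce of that process looks like in code, here's a tiny sketch in Python. Everything in it (the reflect helper, the made-up hit point, the perfect-mirror surface) is my own illustrative assumption rather than any particular renderer's API; it just shows "go straight, hit something, go straight in a new direction."

```python
import numpy as np

def reflect(direction, normal):
    """Mirror-reflect a unit direction vector about a unit surface normal."""
    return direction - 2.0 * np.dot(direction, normal) * normal

# A ray: where it is, and which way it's headed (both 3-vectors).
origin = np.array([0.0, 0.0, 0.0])
direction = np.array([0.0, 0.0, 1.0])    # heading straight down +z

# Suppose the ray hits a surface at this point, with this surface normal.
# (A real ray tracer would find the hit point by intersecting the ray with
# the scene geometry; here the numbers are simply made up for illustration.)
hit_point = np.array([0.0, 0.0, 5.0])
hit_normal = np.array([0.0, 0.0, -1.0])

# After the hit, the ray continues in a straight line in the new direction.
new_direction = reflect(direction, hit_normal)
print(new_direction)    # [0. 0. -1.]  -- bounced straight back
```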

So at any location out in the open, there's a bunch of light rays passing through that location on their way between the objects in your room. In fact, there's a light ray going through that point in every possible direction (well, almost). To see that this is true, pick any point in space and pick some direction through that point. If you follow that direction backwards, you'll eventually hit something (unless you're outside at night and that direction keeps going straight into space). That's where this particular straight portion of the light ray begins. Follow it forwards and you'll hit something else. That's where the straight portion ends.

To resolve the problem with a light ray coming from the blackness of space (i.e. nowhere), we associate a color with every light ray. That way, if it appears that there's no light ray in a certain direction at a certain location, you could just say that there actually is a light ray, but it's black!

When the light ray gets emitted, it has a certain color. When the light ray hits something and gets reflected, refracted, or scattered, its color may change. But otherwise, out in the open air, the color of a light ray stays the same as the light travels².

So, here's the punchline. The plenoptic function describes, for every point in space and every direction through that point, the color of the light ray through that point and in that direction. Wasn't that simple?
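
If the definition reads better to you as code, here's a toy version in Python. The scene (a single glowing red sphere floating in blackness), the angle convention, and the function name are all made up for illustration; the point is just the signature: a position and a direction in, a color out.

```python
import numpy as np

SPHERE_CENTER = np.array([0.0, 0.0, 10.0])    # made-up scene: one glowing
SPHERE_RADIUS = 2.0                           # red sphere, black everywhere else
RED, BLACK = (1.0, 0.0, 0.0), (0.0, 0.0, 0.0)

def plenoptic(x, y, z, phi, theta):
    """Toy plenoptic function: the color of the light ray through (x, y, z)
    in the direction given by azimuth phi and polar angle theta."""
    origin = np.array([x, y, z])
    direction = np.array([np.sin(theta) * np.cos(phi),    # assumed angle
                          np.sin(theta) * np.sin(phi),    # convention
                          np.cos(theta)])
    # Does the ray from 'origin' along 'direction' hit the sphere?
    oc = origin - SPHERE_CENTER
    b = np.dot(direction, oc)
    disc = b * b - (np.dot(oc, oc) - SPHERE_RADIUS ** 2)
    if disc >= 0.0 and -b + np.sqrt(disc) > 0.0:
        return RED      # the ray comes from the sphere's surface
    return BLACK        # the ray "comes from" empty blackness

print(plenoptic(0.0, 0.0, 0.0, 0.0, 0.0))   # looking straight at the sphere -> red
```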

I bet you're wondering: how does this help with 3D TV? Well, to make that work, we need to (1) figure out what the values of the plenoptic function (the colors of the light rays) are in the scene that you want to broadcast, (2) compress those values and transmit them, (3) receive the values and decompress them, and finally (4) reconstruct those values in your living room. And you have to do all four steps 30 times a second or so, so you can transmit moving scenes and not just static ones.

Step (1) is starting to mature, as the commercial applications above attest. To make the job easier, researchers take advantage of the fact that the color of the light ray stays the same as it travels. To figure out the values of the plenoptic function, they just take pictures of the scene from various viewpoints.
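
Here's why photographs work as measurements: each pixel of an ordinary photo records the color of exactly one light ray, namely the one arriving at the camera's position from one particular direction. Below is a rough sketch of that bookkeeping under an idealized pinhole-camera model; the focal length, image size, and camera position are all made-up numbers.

```python
import numpy as np

# Idealized pinhole camera looking down the +z axis (all numbers illustrative).
CAMERA_POS = np.array([1.0, 2.0, 3.0])    # where the camera sits in the scene
FOCAL = 500.0                             # focal length, in pixels
WIDTH, HEIGHT = 640, 480                  # image size, in pixels

def pixel_to_plenoptic_sample(px, py, color):
    """One photo pixel = one sample of the plenoptic function:
    a position, a direction, and the color measured along that ray."""
    # Direction of the ray that landed on pixel (px, py), in camera coordinates.
    d = np.array([px - WIDTH / 2.0, py - HEIGHT / 2.0, FOCAL])
    d /= np.linalg.norm(d)
    # Convert the direction to the spherical angles used earlier.
    theta = np.arccos(d[2])            # polar angle from the +z axis
    phi = np.arctan2(d[1], d[0])       # azimuth around the z axis
    x, y, z = CAMERA_POS
    return (x, y, z, phi, theta, color)

# The pixel at the very center of the image samples the ray pointing straight ahead.
print(pixel_to_plenoptic_sample(320, 240, (0.8, 0.7, 0.6)))
```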

Steps (2) and (3) are in the beginning stages of research. There's just a ton of information in a plenoptic function - on the order of hundreds of gigabytes per second if you want a high-resolution picture.
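
To see where a number like that comes from, here's a back-of-the-envelope calculation. The particular figures (a 32×32 grid of viewpoints, 1920×1080 pixels per view, 3 bytes of color, 30 frames per second) are just plausible assumptions, not the specs of any real system:

```python
viewpoints = 32 * 32        # a 32x32 grid of camera positions (assumed)
pixels = 1920 * 1080        # resolution of each view (assumed)
bytes_per_pixel = 3         # 8 bits each for R, G, B
frames_per_second = 30

bytes_per_second = viewpoints * pixels * bytes_per_pixel * frames_per_second
print(bytes_per_second / 1e9)   # ~191 GB/s, uncompressed -- hundreds of gigabytes
```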

Step (4) has been stuck in the research stage since the 1970s. It'll probably stay there for a while longer. The device that performs step (4) is called an autostereoscopic display.

So that's it. The next time you meet your computer graphics geek friends, you can impress them by asking them what a plenoptic function is. And then make fun of them if they don't know.


¹ Well, mostly. A mirage is an example of a phenomenon that violates this claim.
² Again, mostly. The coloration of the sky is a notable exception.

The rest of this writeup is for hardcore computer graphics people.

The term 'plenoptic function' was coined by two computer vision researchers in 1991 [1]. In its most general form, the plenoptic function is seven-dimensional: it gives the radiance along a light ray as a function of the ray's position, its direction, the wavelength, and time. In other words, the function is

L(x, y, z, φ, θ, λ, t)

For static scenes and assuming an RGB representation, this function can be reduced to three five-dimensional functions:
L_R(x, y, z, φ, θ)

L_G(x, y, z, φ, θ)

L_B(x, y, z, φ, θ)

The dimensionality can be further reduced to four if we assume that the radiance is the same at every point along a given line in space. This assumption precludes the representation of certain types of scenes; in particular, some effects of occlusion can no longer be represented. We also need a new parameterization, the two-plane parameterization, to specify an arbitrary line in space [2,3] (a sketch of it follows the functions below):
L_R(u, v, s, t)

L_G(u, v, s, t)

L_B(u, v, s, t)
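
For concreteness, here's a small sketch of what the two-plane parameterization means geometrically. Placing the planes at z = 0 and z = 1 is my own arbitrary choice for illustration; the cited papers make their own choices.

```python
import numpy as np

def ray_from_two_plane_coords(u, v, s, t):
    """In the two-plane parameterization, a line in space is named by where it
    crosses two fixed parallel planes: point (u, v) on the first plane and
    point (s, t) on the second. Here the planes are z = 0 and z = 1 (assumed)."""
    p0 = np.array([u, v, 0.0])          # intersection with the first plane
    p1 = np.array([s, t, 1.0])          # intersection with the second plane
    direction = p1 - p0
    direction /= np.linalg.norm(direction)
    return p0, direction                # a point on the line plus its direction

# The light field L_R(u, v, s, t) stores the red radiance along exactly this line.
origin, direction = ray_from_two_plane_coords(0.0, 0.0, 0.5, 0.0)
print(origin, direction)   # the line through (0, 0, 0) and (0.5, 0, 1)
```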

See the references for details.

[1] E.H. Adelson and J.R. Bergen, "The plenoptic function and the elements of early vision," in Computational Models of Visual Processing, Landy and Movshon, Eds. MIT Press, Cambridge, Massachusetts, 1991, ch. 1.
[2] M. Levoy and P. Hanrahan, "Light Field Rendering," in Computer Graphics, Annual Conference Series, 1996.
[3] S.J. Gortler, R. Grzeszczuk, R. Szeliski, and M.F. Cohen, "The Lumigraph," in Computer Graphics, Annual Conference Series, 1996.
