Introduction: Uses of Mixed Graphics and Live Action

The art of mixing animation and film has existed as a special effects staple since the days of the earliest science fiction films; naturally this has included computer animation ever since it became good enough to produce anything worth watching. The first film of note to make extensive use of computer graphics was Tron, which wowed its early-eighties audience with its neon lighting and computer-generated motorbikes. Since then we have seen a handful of completely computer-animated feature films, and ever-increasing use of computer-generated effects in other films and on television.

Another major area where computer graphics and live action are mixed is Augmented Reality - computer graphics superimposed in real time onto views of the real world. This has found promising applications in surgery (allowing surgeons to view internal organs in tandem with the usual outside view), aeronautics (allowing pilots to see landing strips obscured by fog, among other things), military communication (providing soldiers with information which is otherwise unavailable) and 'virtual heritage' (allowing visitors to see reconstructions of archaeological sites as they wander around the real thing). For more on this see the Augmented Reality node.

In films, special-effects extravaganzas like The Matrix remain the most prominent outlet for computer graphics (see CGI) mixed with live action, but as the techniques become more sophisticated we are seeing more and more subtle uses of the technology to replace things which might once have been done with live action but now work out cheaper on computer: take films like The Fellowship of the Ring, in which whole crowds engaged in battle were generated in computers and seamlessly integrated with live-action film, or Titanic, in which computer effects - expensive as they were - created many shots which could have been achieved physically only at even greater cost.

On a similar note, while the number of television programmes using obvious computer-generated effects is steadily growing, we are also seeing more and more use of 'virtual studios'. Sometimes these are used to create fantastic backgrounds and effects, as in the Tomorrow's World virtual studio, but just as often they imitate realistic settings, like the Sydney waterfront backdrops used in the BBC's coverage of the 2000 Olympics, for which it would have been impractical to use the real thing.

Putting live actors on computer graphics backgrounds

The basic techniques used today to put images of actors onto made-up backgrounds have a good deal in common with techniques which have been in use for many years already. The problem is to make the undesired parts of the scene invisible; usually this is achieved by some kind of chroma-keying: Making parts of the picture transparent according to their colour. Originally chroma keying worked only on a binary basis: A particular part of the final image could come from one source or the other. The release of Ultimatte in 1978 introduced a subtler process which is much harder to spot if done right; each part of the foreground image is made transparent in proportion to the luminance of its blue channel.
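
To make the idea concrete, here is a minimal sketch of a soft chroma key in Python (with NumPy). The thresholds, and the choice of driving the matte by how far blue exceeds green, are illustrative assumptions for this sketch, not the actual Ultimatte algorithm, which is proprietary:

    import numpy as np

    def soft_blue_key(frame):
        """Soft chroma key: alpha falls off smoothly as pixels become
        more blue-dominant, instead of a hard in-or-out decision.
        `frame` is an RGB float array in [0, 1]; returns an alpha
        matte where 1 = fully foreground."""
        blueness = frame[..., 2] - frame[..., 1]  # blue minus green
        # Soft ramp: fully opaque below 0.1, fully transparent above
        # 0.5. These thresholds are invented for illustration and
        # would be tuned per shot in practice.
        alpha = 1.0 - np.clip((blueness - 0.1) / 0.4, 0.0, 1.0)
        return alpha

    def composite(fg, bg, alpha):
        """Blend foreground over background using the soft matte."""
        return fg * alpha[..., None] + bg * (1.0 - alpha[..., None])

The soft ramp is what distinguishes this from binary keying: pixels at the edges of the subject, where foreground and background mix, end up partially transparent rather than flickering in and out.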

Computers are now able to do the same job more cheaply, and with more versatility; any hue or range of hues can easily be chosen as the 'key colour(s)', and it becomes possible to take account of other information from the picture, such as saturation. Another significant development is the invention of the 'no-blue' technique to reduce 'spill' - the problem of the foreground being illuminated by the colour screen and so becoming inappropriately see-through, especially around the edges. 'No-blue' works by replacing the traditional blue or green background with special retroreflective material, and illuminating it using a ring of LEDs around the camera. Retroreflective materials reflect light directly back the way it came; they are also used in cat's eyes on roads, and in the reflective strips used by cyclists. In the case of 'no-blue', almost all of the light from the LEDs which hits the screen comes straight back to the camera, so it is bright enough that the computer knows which bits need to be screened out; the light which hits ordinary objects in the foreground is scattered in all directions, so it is easily swamped by the studio lighting.
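
Spill can also be suppressed after the fact in software. One classic trick - again a sketch of the general idea rather than any particular product's method - is simply to clamp the blue channel so it never exceeds the green:

    import numpy as np

    def despill_blue(frame):
        """Suppress blue spill: any pixel whose blue exceeds its green
        is assumed to be tinted by bounce from the bluescreen, so its
        blue channel is clamped down to the green level. `frame` is a
        float RGB array."""
        out = frame.copy()
        out[..., 2] = np.minimum(frame[..., 2], frame[..., 1])
        return out

This is crude - genuinely blue costumes suffer - but it shows why spill is partly a software problem as well as a lighting one.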

There are several other techniques available to reduce spill. A system known as KineFlos uses fluorescent backdrops lit with ultraviolet light, illuminating the background without throwing visible light onto the foreground. Another way to reduce the light reflected from the colour screen onto the subject is simply to mask off the parts of the screen which are far enough from the subject's edge not to be critical.

Even with all of these techniques, without careful fine-tuning it is all too easy to end up with visibly wrong edges on your characters when using bluescreening - it's not uncommon to see this even in productions which for the most part look very professional.

Putting computer graphics objects and characters on photographed backgrounds

Since 3D renderers often allow you to specify a transparency channel directly, the problem of defining which parts of the picture should be see-through is very much simplified - although there is a danger of excessively clean lines being obvious if the opacity at the edges of the image is not right. Another problem is that objects fresh from the ray-tracer tend to look far too clean and polished unless a great deal of care is taken; to avoid this, special effects artists put a lot of work into making sure their textures are slightly rough - by putting a certain amount of random noise into their colour maps, bump maps and specular maps, for instance.
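
Both halves of that are easy to sketch. Below, `over` composites a rendered RGBA image onto a photographed plate using the renderer's own alpha channel (assumed here to be straight, i.e. non-premultiplied), and `roughen` adds a touch of noise to a texture map; the noise amount is an invented illustrative figure:

    import numpy as np

    rng = np.random.default_rng(0)

    def over(fg_rgba, bg_rgb):
        """Composite a rendered RGBA image (straight alpha) over a
        photographed RGB background using the renderer's alpha."""
        rgb, a = fg_rgba[..., :3], fg_rgba[..., 3:4]
        return rgb * a + bg_rgb * (1.0 - a)

    def roughen(texture, amount=0.02):
        """Break up the too-clean look of a rendered surface by adding
        a little uniform noise to a texture map (colour, bump or
        specular). The 0.02 amount is illustrative; in practice it is
        tuned per map."""
        noisy = texture + rng.uniform(-amount, amount, texture.shape)
        return np.clip(noisy, 0.0, 1.0)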

General Issues for Special Effects

Having got your actor (or whatever) separated nicely from the rest of the scene, the next challenge is to synchronise any events between the two sources. For a live actor on a computer graphics backdrop this might mean using screened-out props to play the part of computer graphics objects - a computer graphics chair could be matched by a real-world chair or just a block, for instance. Animated characters in real scenes might also need to interact with real objects, like a dinosaur knocking down trees or Roger Rabbit hiding in Bob Hoskins's shirt. A very rough analogue of the forces involved will often do the trick: As long as a tree comes down when a tyrannosaurus rex crashes into it, no-one but an effects buff is going to be thinking about the hidden ropes that might be pulling it over.

Matching camera movements is equally important: To achieve this end, systems have been developed for tracking the precise movements of the real camera to within 1mm or 0.01° of pan/tilt/roll. The BBC use a system whereby a small camera is pointed towards the ceiling to record the positions of sets of unique markers relative to the main camera. This information is fed into a computer which uses it to quickly calculate the exact position and orientation of the camera; this in turn controls the position and orientation of the virtual camera pointing at the computer graphics backdrop. The system is called free-d, and has been licensed by several other companies.
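
The way a tracked pose drives the virtual camera can be sketched as follows; the axis conventions and function names here are one plausible choice for illustration, not the actual free-d interface:

    import numpy as np

    def rotation(pan, tilt, roll):
        """Camera orientation from pan (about the vertical axis),
        tilt (about the horizontal axis) and roll (about the lens
        axis), all in radians."""
        cy, sy = np.cos(pan), np.sin(pan)
        cx, sx = np.cos(tilt), np.sin(tilt)
        cz, sz = np.cos(roll), np.sin(roll)
        Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
        Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
        return Ry @ Rx @ Rz

    def view_matrix(position, pan, tilt, roll):
        """4x4 world-to-camera matrix: the virtual camera is slaved
        directly to the tracked pose of the real camera."""
        R = rotation(pan, tilt, roll)
        M = np.eye(4)
        M[:3, :3] = R.T             # inverse of the camera rotation
        M[:3, 3] = -R.T @ position  # inverse of the camera translation
        return M

Feeding each frame's tracked position and angles through view_matrix keeps the rendered backdrop locked to the real camera move.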

Matching the focus (both the point of focus and the depth of field) is also crucial. Software now exists to match computer graphics focus automatically to inputs from real cameras. The BBC has produced software which allows focusing to be taken into account even with what is basically a 2D backdrop: A simple depth map allows the computer to defocus parts of the picture as appropriate. In many cases a 2D backdrop of this sort is easier to obtain, and less computationally intensive to deal with, and as long as camera movement is restricted to pan, tilt and zoom it works quite nicely. This software is licensed under the name of D-focus.
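
A depth-map defocus of this kind can be approximated in a few lines; the layered-blur approach below is a common approximation of the general idea, not the actual D-focus algorithm:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def defocus(image, depth, focus_depth, max_sigma=6.0, levels=5):
        """Blur each pixel of a 2D backdrop in proportion to how far
        its depth-map value lies from the plane of focus. Quantizes
        the blur into a few levels, pre-blurs the whole image at each
        level, then picks the right level per pixel. `image` is an
        HxWx3 float array, `depth` is HxW."""
        coc = np.abs(depth - focus_depth)   # defocus amount per pixel
        if coc.max() > 0:
            coc = coc / coc.max()           # normalise to [0, 1]
        level = np.minimum((coc * levels).astype(int), levels - 1)
        out = np.empty_like(image)
        for i in range(levels):
            sigma = max_sigma * i / max(levels - 1, 1)
            layer = gaussian_filter(image, sigma=(sigma, sigma, 0)) if sigma > 0 else image
            out[level == i] = layer[level == i]
        return out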

It's also crucial to make sure that lighting is consistent: This means making the lights in the virtual scene match those in the real scene, or vice versa, and also matching up the black levels. One of the most common mistakes made in compositing is failing to ensure that the darkest parts of the computer graphics image are neither darker nor lighter than the darkest parts of the photographic image.
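
Black-level matching in particular reduces to a simple measurement and offset. A minimal sketch, assuming float images and using a low percentile rather than the absolute minimum so that a single dead pixel cannot skew the result:

    import numpy as np

    def match_black_level(cg, plate):
        """Shift the CG image so its darkest tones sit at the same
        level as the darkest tones of the photographed plate."""
        cg_black = np.percentile(cg, 0.5)
        plate_black = np.percentile(plate, 0.5)
        return np.clip(cg + (plate_black - cg_black), 0.0, 1.0)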

The overall texture of the film or video presents another problem; every way of filming has a characteristic grain, and mismatches are not hard to spot. Modern compositing software often provides powerful tools for adding the right kind of noise to rendered images to match them to footage from real cameras.
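
A first approximation of grain matching - measure the noise in a flat patch of the real footage, then add noise of the same strength to the render - looks like this; real grain has a characteristic size and tonal response which this simple Gaussian sketch ignores:

    import numpy as np

    rng = np.random.default_rng(0)

    def match_grain(cg, flat_patch):
        """Estimate grain strength from a featureless patch of the
        real footage, then add Gaussian noise of the same standard
        deviation to the rendered image."""
        grain_sigma = flat_patch.std()
        return np.clip(cg + rng.normal(0.0, grain_sigma, cg.shape), 0.0, 1.0)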

Finally, details like reflections and shadows can be crucial to a convincing composite. In putting live actors onto computer graphics backgrounds, it's often possible to use luminance information from their real shadows to create the shadows in the computer graphics scene. For this to work the real scene has to be substantially the same shape as the computer graphics version, or at least a shape which the computer can map onto the virtual scene as required. Conversely, making shadows and reflections for computer graphics objects placed in real scenes requires a 3D model of the real-world set.
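
One way to extract that luminance information, sketched here under the assumption that a 'clean plate' of the set without the actor is available, is to take the ratio of the two shots: where the actor's shadow falls, the ratio drops below one, and that ratio can then darken the computer graphics scene:

    import numpy as np

    def shadow_matte(plate, clean_plate, eps=1e-3):
        """Recover the actor's real shadow as a per-pixel darkening
        ratio by dividing the shot (actor plus shadow) by an aligned
        clean plate of the same set. Values near 1 mean no shadow;
        values below 1 darken."""
        weights = np.array([0.299, 0.587, 0.114])  # RGB -> luminance
        ratio = (plate @ weights) / np.maximum(clean_plate @ weights, eps)
        return np.clip(ratio, 0.0, 1.0)

    def apply_shadow(cg_background, matte):
        """Darken the CG background wherever the real shadow fell."""
        return cg_background * matte[..., None]

The pixels covered by the actor produce a meaningless ratio, but that does no harm: they are overwritten when the keyed foreground is composited on top.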

Conclusions

There can be little doubt that we will continue to see more and more integration of computer graphics with live action and real-world sets. These effects will keep getting more convincing and easier to put together as the technology improves, and new uses for them are constantly being found and developed. Virtual TV presenters are just starting to make their mark, while the virtual studio is heading for ubiquity. In films, whole new vistas of possibilities are being opened up by the technology, allowing film makers to construct breathtakingly convincing fantasy realities and synthetic characters of unprecedented realism. The future should be very interesting to watch...

References

  • Steve Bradford, The Blue Screen Page (http://www.seanet.com/~bradford/bluscrn.html)
  • Bob Kertesz, Ultimatte According to Bob (http://www.gregssandbox.com/bs/kertesz.htm)
  • Terrence Masson, Details, Details, Details..., Visual Magic Magazine 15/11/99 (http://visualmagic.awn.com/vmag/articleprint.php3?article_no=2&page=2)
  • Graham Thomas, Digital TV in Studio Applications, lecture at Essex (http://esewww.essex.ac.uk/campus-only/level4/ee411/gat.pdf)
  • SFX for Television, Post Magazine (http://www.postmagazine.com/features/animation/1000sfx_for_tv/sfx_for_tv.htm)
  • VES 2000: Visual Effects up close, Cinematographer (http://www.cinematographyworld.com/article/printerfriendly/0,7226,100646,00.html)