CGI stands for Computer Generated Imagery (or Images). Most people, when referring to CGI, really mean 3D elements and special effects in film or TV. CGI is, however, a much bigger category than that, and the term is somewhat misleading. CGI can include any image "generated" by a computer at all, including a crappy MS Paint illustration. A Photoshop painting, or even a picture taken by a digital camera, could be considered a "Computer Generated Image," but most people use the term to refer to on-screen visual effects. In any case, using it to describe 3D animation and visual effects sells an art form quite short.

It seems to suggest that images made with a computer are "programmed," or magically generated somehow. Someone on Slashdot made a comment along the lines of "Computers took the magic out of special effects. These days you can do anything just by throwing enough computers at it." Another movie reviewer complained that The Hulk didn't look good because it was programmed by a bunch of nerds in a basement somewhere.

This attitude is completely wrong, and it sells a lot of artistic work really short. Nobody typed 'dinosaur' into a computer and had it generate a T-rex for Jurassic Park. People had to, quite painstakingly, design, make, and then animate the dinosaurs in that movie.

This writeup is a summary of the process of creating a 3D character (or creature) from scratch, as an element in a motion picture. The steps are, in roughly this order (it's a bit flexible, but this is a convenient order often followed), with classical art equivalents where possible:


  • Design/Photography/Setup: This isn't really a step so much as all of the work that needs to be done before the CG work can start. It includes all of the design work for the character (which is usually done mostly or entirely by hand, by a concept artist). If the shot involves any real photography, it needs to be shot. Some of this can overlap with the 3D work (modeling, texturing, and rigging can all proceed without the plate photography), but the design work has to come before everything else.


  • Modeling: Making a computer model of the 3D element; in this case, a character. The goal is to make a model that looks as much like the concept work as possible. Obviously, the more detailed and accurate the concept drawings are, the more straightforward this process is, but there is always a certain amount of invention needed. A 2D drawing allows the artist to "cheat" in ways that are simply not possible in 3D. This might be in the form of one arm being longer than the other, or things looking different from different angles (faces are notorious for this: they have to be more or less symmetrical to look good in a model, but a face drawn from a 3/4 view often has two sides that are completely different from each other). Modeling is much like sculpting, in a lot of ways. The translation from 2D to 3D is often more difficult than it would seem, and not for technical reasons. Models are usually made using polygons or NURBS.
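
    To make "model" concrete: under the hood, a polygon model is little more than a list of points in space and a list of faces that index into it. Here is a minimal sketch in Python (the layout is simplified and the names are just for illustration):

      # A polygon mesh, stripped to its essentials: a list of 3D points,
      # and faces defined as indices into that list.

      vertices = [
          (0.0, 0.0, 0.0),  # vertex 0
          (1.0, 0.0, 0.0),  # vertex 1
          (1.0, 1.0, 0.0),  # vertex 2
          (0.0, 1.0, 0.0),  # vertex 3
      ]

      faces = [
          (0, 1, 2),  # one triangle...
          (0, 2, 3),  # ...and a second, together forming a quad
      ]

    Everything the modeler sculpts ultimately boils down to data like this; a film-quality character simply has vastly more of these points.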


  • Texturing: Texturing is defining the appearance of the surface of a 3D object. To texture something, one finds a way to fit one or more two-dimensional images onto the 3D surface. The surface needs to be unwrapped first, which involves laying out the polygon mesh, or UVs, onto a flat 2D image. For complex 3D shapes, such as a face, there is no way to get a 2D image onto a 3D shape without some stretching, so it's always a compromise. Usually the goal of an unwrap is to hide the stretching and pinching, or to spread it out so that it's less noticeable. Whatever works. After this is finished, one can start painting the textures that will be mapped onto the model using the unwrap.
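
    To make the idea of UVs concrete, here is a toy sketch in Python (the function and data are hypothetical): the unwrap assigns each vertex a 2D (u, v) coordinate in the 0 to 1 range, and at render time those coordinates are used to look up pixels in the painted image.

      def sample(image, u, v):
          # Look up the texel at (u, v); both run from 0 to 1 across
          # the image, which is a list of rows of (r, g, b) tuples.
          height = len(image)
          width = len(image[0])
          x = min(int(u * width), width - 1)
          y = min(int(v * height), height - 1)
          return image[y][x]

      # UV coordinates assigned to four vertices during the unwrap.
      uvs = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]

      # A tiny 2x2 "texture"; the unwrap decides which vertex lands
      # on which part of it.
      texture = [[(255, 0, 0), (0, 255, 0)],
                 [(0, 0, 255), (255, 255, 0)]]

      print(sample(texture, *uvs[2]))  # the color under the third vertex

    All of the stretching and pinching trouble comes from the fact that this (u, v) assignment can never be distortion-free for a curved surface.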

    First is the color map: it defines what color the surface is, and where. The renderer will do the shading due to light and shadow, so ideally, a color map is just the color of every part of the model as if it were exposed to equal light everywhere. Often there is "cheating" involved, such as painting shadows into wrinkles, or making the inside of the nose dark or black, since the renderer won't do a very good job drawing shadows in features created with a bump map. Being able to paint is necessary to make a nice color map.

    The specularity map is also important. The spec map shows where a model is more shiny. On a face, the spec map would be mostly black, with spots all over the cheeks and nose for oily pores and what-not; lips would be lighter colored with black cracks, wrinkles would be dark, and so on. For rusted, tarnished, or dirty metal, the dirty spots would be less shiny, and therefore dark compared to the rest of the surface. The specular map usually uses the same unwrap as the color map, so the two are coordinated.

    A bump map defines bumps in a surface: wrinkles, pores, anything small enough not to be seen in a silhouette. For bumps big enough to be seen in a silhouette, a displacement map should be used, which actually modifies the polygons to make the bumps (but is computationally expensive).

    Usually a color map is painted in Photoshop or Painter, in many different layers, and then duplicated and modified to create the other necessary maps, to make sure all of the features sync up right.
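
    Here is a toy sketch of why the maps have to sync up (the shading model is drastically simplified and the names are hypothetical): at render time, every map is sampled at the same (u, v) spot on the unwrap, so a wrinkle painted into the color map had better sit at the same UVs as that wrinkle in the spec map.

      def shade(uv, color_map, spec_map, light_intensity):
          # Toy shading: all maps are looked up at the same (u, v)
          # indices, which is why they must share one unwrap.
          u, v = uv
          base = color_map[v][u]       # flat surface color, as painted
          shininess = spec_map[v][u]   # 0.0 = matte, 1.0 = very shiny
          diffuse = [c * light_intensity for c in base]
          highlight = 255 * shininess * light_intensity
          return tuple(min(255, c + highlight) for c in diffuse)

      color_map = [[(180, 140, 120)]]  # a 1x1 "skin tone" map
      spec_map = [[0.3]]               # slightly oily at the same spot
      print(shade((0, 0), color_map, spec_map, 0.8))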


  • Rigging: Putting a skeleton and/or control system into the character, to make it movable. The surface will be controlled by the bones. While it sounds straightforward, good rigging is actually fiendishly complicated at times, and an art form and science in itself. A good rig is one that allows an animator to animate easily and effectively. Many rigs have things like foot rolls, IK and FK bone systems, and the like.

    After building a rig, you have to attach the model to it. This is called enveloping in Softimage XSI, and binding in Maya. It basically involves defining what part of the surface is affected by which bones. Areas like the shoulder are always problem areas, as they are affected by lots of bones. Getting nice deformations is difficult. There are lots of tricks, such as using a lattice to control the character, and using bones to control the lattice.
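
    The standard technique behind enveloping/binding is usually called linear blend skinning. A minimal sketch, simplified to translations only (real implementations use full bone transforms): each vertex stores a weight per bone, and its deformed position is the weighted blend of where each bone would carry it.

      def skin_vertex(position, weights, bone_offsets):
          # Each bone "votes" on where the vertex should move,
          # weighted by how strongly it influences this vertex.
          x, y, z = position
          dx = dy = dz = 0.0
          for bone, weight in weights.items():
              ox, oy, oz = bone_offsets[bone]
              dx += weight * ox
              dy += weight * oy
              dz += weight * oz
          return (x + dx, y + dy, z + dz)

      # A vertex near the shoulder, influenced by two bones at once;
      # exactly the kind of spot described above as a problem area.
      weights = {"upper_arm": 0.6, "clavicle": 0.4}
      bone_offsets = {"upper_arm": (0.0, -1.0, 0.0),
                      "clavicle": (0.0, -0.2, 0.0)}
      print(skin_vertex((1.0, 2.0, 0.0), weights, bone_offsets))

    Tuning those per-vertex weights is a large part of what makes binding so fiddly.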

    Facial rigging is usually done (partly) by the modeler, who models the face in a number of expressions; the animator then combines these to get the full range of facial expressions.
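
    These modeled expressions are generally combined as blend shapes (Maya's term; XSI calls it shape animation). A minimal sketch, with hypothetical data: each expression is stored as per-vertex offsets from the neutral face, and the animator's sliders just mix the offsets together.

      def blend(neutral, shapes, sliders):
          # Mix expressions: each shape is a list of per-vertex
          # offsets from the neutral face, scaled by its slider.
          result = list(neutral)
          for name, weight in sliders.items():
              for i, (dx, dy, dz) in enumerate(shapes[name]):
                  x, y, z = result[i]
                  result[i] = (x + weight * dx,
                               y + weight * dy,
                               z + weight * dz)
          return result

      neutral = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]  # two face vertices
      shapes = {
          "smile": [(0.0, 0.1, 0.0), (0.0, 0.2, 0.0)],
          "blink": [(0.0, -0.05, 0.0), (0.0, 0.0, 0.0)],
      }
      print(blend(neutral, shapes, {"smile": 0.7, "blink": 0.25}))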


  • Animation: Animating the character means defining its movement and performance. This is done using keyframe animation, which involves setting keyframes on bones and control objects at certain times in certain positions. It's popular to save keys on everything on the same frame, in order to organize the animation a bit. The computer interpolates between the keys, and creates F-Curves (Function Curves) to control the object's movement. Because there are three dimensions, there are three separate curves for translation (movement along X, Y, and Z), and three more for rotation (around the X, Y, and Z axes).
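
    A sketch of what that interpolation amounts to (real F-Curves are usually splines with adjustable tangents; plain linear interpolation is shown here to keep it short):

      def evaluate_fcurve(keys, frame):
          # keys: (frame, value) pairs, sorted by frame. Returns the
          # in-between value; real F-Curves use spline interpolation.
          if frame <= keys[0][0]:
              return keys[0][1]
          for (f0, v0), (f1, v1) in zip(keys, keys[1:]):
              if f0 <= frame <= f1:
                  t = (frame - f0) / (f1 - f0)
                  return v0 + t * (v1 - v0)
          return keys[-1][1]

      # A Y-translation curve with keys on frames 1, 12, and 24.
      y_curve = [(1, 0.0), (12, 5.0), (24, 0.0)]
      print(evaluate_fcurve(y_curve, 6))  # partway up, on frame 6

    The animator edits the shape of these curves directly, to control easing, overshoot, and so on.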

    The computer assumes nothing about your animation. There is no gravity in its mind. Characters are just a bunch of geometry attached to some control objects floating in space. Making it appear as though the character has weight and inertia and is obeying the laws of physics is the animator's job. Beyond this, the animator needs to 'act' through the character. The alternative to all this is motion capture.

    Motion capture means rigging up a real human actor to an insanely expensive device that attempts to record his movements in 3D space and apply them to a CG character. It never works perfectly, and motion capture animation always needs to be cleaned up. The advantage an animator has over motion capture is control: an animator can make movements more extreme and faster, and has far better control over the performance than a director has over a capture session. And although realistic, motion capture isn't particularly interesting to look at as animation. Still, making realistic animation by hand is very difficult, and often motion capture is used instead.


  • Simulation: Simulation includes cloth simulation (self-explanatory), soft body and hard body dynamics, fluid simulations, particle simulations, and more. Cloth simulation is the simulation of cloth or anything that acts like cloth (plastic, for example). It often doesn't work very well, going through obstacles instead of colliding with them, and it takes some coaxing to get it to behave. Soft body simulations are for things that are soft but not cloth, things with substance like jello or a rubber ball. Hard body simulations are for things that collide with each other but don't bend. Fluid simulations are relatively new and very computationally intensive, and deal with simulating fluids. Particle simulations can be almost anything: rain, dust, flies, ants. All simulations can be affected by forces such as gravity, friction, and so on. Simulations are difficult to get working the way you want, and temperamental. Maya simulations often explode. XSI simulations often don't work, or just work really badly. New simulation programs are being written all the time, for hair, ductile fracture, and more.
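
    As a taste of what a particle simulation does each frame, here is a minimal sketch (plain Euler integration with a crude floor collision; production simulators are far more sophisticated): each particle carries a position and a velocity, and every step applies the forces and advances the state.

      GRAVITY = (0.0, -9.8, 0.0)

      def step(particles, dt):
          # Euler integration: forces update velocity,
          # velocity updates position.
          for p in particles:
              p["vel"] = tuple(v + g * dt
                               for v, g in zip(p["vel"], GRAVITY))
              p["pos"] = tuple(x + v * dt
                               for x, v in zip(p["pos"], p["vel"]))
              # Crude collision with a floor at y = 0; misses like this
              # are why simulations tunnel through obstacles when the
              # time step is too large.
              if p["pos"][1] < 0.0:
                  x, y, z = p["pos"]
                  p["pos"] = (x, 0.0, z)

      particles = [{"pos": (0.0, 5.0, 0.0), "vel": (1.0, 0.0, 0.0)}]
      for frame in range(24):          # one second at 24 fps
          step(particles, 1.0 / 24.0)
      print(particles[0]["pos"])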


  • Lighting: Much like traditional lighting for photography. One places lights of arbitrary colors, intensities, types, and directions around 3D space, in order to mimic the light of the shot or, when there is no real light to match, to make something that simply looks right.

    Lights can be point lights (all light comes from a single point and radiates outward), spot lights (all light comes from a single point, but only within a cone of arbitrary angle pointed in a certain direction), or infinite lights (all light comes in parallel rays from a certain direction). Point lights and spot lights can also be area lights, which means that instead of emitting from a point, the light emits from a surface, usually a square or a disc. Area lights are useful because the edges of the shadows they cast are much more like real shadows than those from point sources, but they also take more rendering time.
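
    A sketch of how the light types differ mathematically (simple Lambert diffuse shading, everything else stripped away): for a point light, the light direction depends on where the surface point is; for an infinite light, it is the same everywhere.

      import math

      def normalize(v):
          length = math.sqrt(sum(c * c for c in v))
          return tuple(c / length for c in v)

      def diffuse(normal, light_dir, intensity):
          # Lambert shading: surfaces facing the light are brightest.
          return intensity * max(0.0, sum(n * l for n, l
                                          in zip(normal, light_dir)))

      surface_point = (0.0, 0.0, 0.0)
      normal = (0.0, 1.0, 0.0)

      # Point light: the direction depends on the surface point.
      light_pos = (2.0, 3.0, 0.0)
      to_light = normalize(tuple(l - p for l, p
                                 in zip(light_pos, surface_point)))
      print(diffuse(normal, to_light, 1.0))

      # Infinite light: one fixed direction for every point.
      sun_dir = normalize((0.3, 1.0, 0.2))
      print(diffuse(normal, sun_dir, 1.0))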

    Lights can be set to include only certain objects, or to exclude certain objects. This can be used for a lot of different effects; for example, an inclusive light can simulate bounce light.


  • Rendering: Rendering is the only part of this process that could fairly be called "computer generating." That's where the computer turns all of the above information into images.

    Most serious productions use pass rendering. This means they break the rendering up into layers, much like Photoshop layers, to be composited back together later. This is done because rendering can take a long time, and after changing something in 3D, you need to render all over again to see the changes. Let's say you have a window with light coming in and hitting a character, casting a shadow on the floor, and the floor is also bouncing light back up onto him. With pass rendering, if you render a separate layer of the character lit by each light source, then in a compositing program (such as Adobe After Effects) you can very easily darken and lighten the individual light sources: less bounce light, more sunlight, or whatever you want until you get the composition you're after. It would take a long time to get it right in the 3D program, but in compositing it's very fast. You can also color correct every element on its own. It's very useful.
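
    A toy sketch of why this is so fast (a "pass" here is a single RGB pixel, and the additive recombine shown is the simplest possible case): the final image is just the per-light passes scaled and summed, so re-balancing a light is one multiply instead of one re-render.

      # Each pass holds the character lit by one light source only.
      sun_pass = (200.0, 180.0, 150.0)
      bounce_pass = (30.0, 35.0, 45.0)

      def comp(passes_and_gains):
          # Scale each pass by its gain and sum them; changing a
          # gain re-balances that light without re-rendering.
          r = g = b = 0.0
          for (pr, pg, pb), gain in passes_and_gains:
              r, g, b = r + pr * gain, g + pg * gain, b + pb * gain
          return (r, g, b)

      # Less bounce light, please: drop its gain, keep the sun as-is.
      print(comp([(sun_pass, 1.0), (bounce_pass, 0.4)]))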


    There are lots of new features in renderers, including radiosity (and all of the different methods for faking it, such as Mental Ray's final gathering), caustics, subsurface scattering, HDRI, and so on, all of which are coming into wider use.

  • Compositing/Integration: Putting it all together. This often involves getting a 3D element into a photograph (or a movie). The original photo or movie is called the plate. If the element is going to be behind anything in the plate, whatever is in the foreground has to be separated from everything else and put on top. This isn't exactly trivial when it's something like grass or a tree moving in the wind; those things can also be shot against a bluescreen and layered on top later.
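
    The core operation for layering a foreground element over the plate is the "over" operator. A minimal sketch for one pixel, assuming premultiplied color (a common convention, though packages differ):

      def over(front, back):
          # Porter-Duff "over" for premultiplied RGBA: the front
          # layer covers the back in proportion to its alpha.
          fr, fg, fb, fa = front
          br, bg, bb, ba = back
          k = 1.0 - fa
          return (fr + br * k, fg + bg * k, fb + bb * k, fa + ba * k)

      # A half-transparent CG element over a pixel of the plate.
      element = (0.4, 0.3, 0.2, 0.5)   # premultiplied RGB plus alpha
      plate = (0.8, 0.8, 0.9, 1.0)
      print(over(element, plate))

    Pulling that alpha off a bluescreen is called keying; the "over" operation is where it gets used.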

    The tools are somewhat easy to learn, but the art of compositing is very difficult to master. The goal is to make it look like everything was right there in front of the camera when the plate was shot. An extremely good eye for anything that is off is required. It's really easy to tell that something is off in a composite, but it is often very difficult to pinpoint what. Master compositors know tons of tricks for faking and cheating things, and the best compositing work is the work you don't actually notice.



So there's a bit more to CGI than having a computer generate something for you. A computer is an invaluable tool, but it is only that, a tool. A big electric pencil. Making a realistic human, for example, takes a lot of talent in a lot of different areas. Countless hours spent by large numbers of artists are needed to make it happen, along with the technical brilliance of the programmers.
