When you stare at something, be it a face, the keyboard, some text, whatever, you foveate. That is to say, your eyes are not at rest on the object, but are constantly moving, analyzing the individual corners and edges that make up what you're looking at. Vision is fully sharp only over a roughly 110 square millimeter region (known as the central retina) of your working retina, which is about 1400 square millimeters in area. When you're looking at something, whether up close or far away, your eyes sweep your central retina over all of its important parts to acquire the whole picture. Individual stops on visual features are called fixations; the rapid jumps between them, called saccades, generally last between 30 and 90 ms.
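Those two figures already hint at why foveation is necessary. A quick back-of-the-envelope check, using the numbers quoted above (exact values vary by source), shows how little of the retina sees at full acuity:

```python
# Back-of-the-envelope: fraction of the retina with full acuity,
# using the figures quoted above (values vary by source).
central_area = 110    # mm^2, central retina
total_area = 1400     # mm^2, whole working retina
fraction = central_area / total_area
print(f"{fraction:.1%}")  # prints "7.9%"
```

Under 8% of the retinal surface, which is why the eye has to sweep that small patch around to build up the full scene.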

Foveation is reflexive, but you can perceive it consciously if you look at something and pay attention to exactly what your eyes are doing. It can also be consciously suppressed by concentrating on a small enough detail (nothing larger than, say, three millimeters in diameter) without breaking focus. A malfunction in the foveation reflex can lead to nystagmus, and underfoveation may lead to a kind of tunnel vision.

A nucleus in the midbrain, the superior colliculus, coordinates foveation and other visual reflexes. In lower animals, vision is interpreted primarily by the superior colliculus, which is responsible for most visually mediated responses -- a frog shooting its tongue to catch a fly, or a school of fish scattering as a predator comes into view. In higher animals, humans included, vision is routed through the thalamus and eventually into the cerebral cortex for processing, leaving the SC to reflexively control foveation and orienting toward movement.

Taking advantage of the eye's tendency to foveate may lead to a new form of high-quality video compression. The idea is to devise an algorithm that predicts which parts of a frame will be taken in first by foveation, leaves those parts at high quality, and compresses the rest of the frame more aggressively. This is analogous to mp3 compression, which discards most heavily the information that is least noticeable. Low-level vision, however, consumes far more cognitive resources than low-level hearing, and is much harder to model with an algorithm. While there are a few competing psychoacoustic models to choose from when compressing into an mp3, there are no well-developed, widely accepted psychovisual models to use for foveation-based compression. With bandwidth becoming ever cheaper and more ubiquitous, it may be a moot point anyway.
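As a minimal sketch of the idea -- not any real codec's method -- here is a toy Python/NumPy routine that keeps full tonal resolution near a predicted fixation point and quantizes the periphery coarsely. The function name, the pixel-distance blend, and all parameters are invented for illustration; a real codec would instead vary the quantization of transform coefficients block by block.

```python
import numpy as np

def foveate_quantize(frame, fx, fy, fine_levels=256, coarse_levels=8, radius=40.0):
    """Toy foveated compression: full tonal resolution at the predicted
    fixation point (fx, fy), coarse quantization in the periphery.
    `frame` is a 2D grayscale array with values in [0, 255]."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs - fx, ys - fy)       # eccentricity in pixels
    # Blend from fine levels at the center to coarse levels past `radius`.
    t = np.clip(dist / radius, 0.0, 1.0)
    levels = fine_levels * (1 - t) + coarse_levels * t
    step = 256.0 / levels                   # quantization step per pixel
    return (np.floor(frame / step) * step).astype(np.uint8)

# Example: a synthetic frame, fixation predicted at the center.
frame = (np.arange(100 * 100, dtype=np.float64).reshape(100, 100)) % 256
out = foveate_quantize(frame, fx=50, fy=50)
```

Pixels at the fixation point pass through unchanged (step size 1), while pixels beyond `radius` collapse onto multiples of 32 -- the peripheral detail the viewer supposedly won't foveate on first.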