Tele-Immersion is an immensely cool idea that has potentially limitless applications. It ties together elements of Virtual Reality and Augmented Reality. At the time of writing, the main stumbling block is that it may actually be theoretically impossible to move around the volume of data that would be required for a convincing realtime stereoscopic projection. Some elements of existing and hypothetical prototype systems under development by the US government are:

Firstly, full 3D imaging and scanning: every surface within the source area has to be scanned at a very fine level of detail. One of the tools used is bouncing points of light around faster than the eye can see and/or outside of the visible spectrum. As well as the shape, colour and texture of surfaces, many other factors have to be recorded and reconstructed in realtime. Because people perceive subtle movements and details of the human face and hands much more acutely than, say, in an inanimate carbon rod, specialised subsystems have to be developed to concentrate on keeping these areas accurate and consistent.

Once all the data is captured, it has to be encoded and piped to the destination, requiring rather more bandwidth than is currently available. Then the projection has to be generated, using a rather large number of projectors and mirrors, while the system scans the observer's eye movements (every observer's, that is ... in practice, some kind of lightweight glasses are required to house the sensors). The framerate is chopped up into enough timeslices to cater for everyone's point of view, and shutters in each pair of glasses are synced to that observer's particular timeslice. As an added party trick, the system can remove the model of the glasses from the projection that is sent (in fact, many trivial manipulations to the model can be incorporated ... very holodeck).
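The timeslicing scheme described above can be sketched as a simple round-robin schedule. This is a hypothetical illustration, not any real system's code: the display's total refresh budget is divided among the viewers, and each viewer's shutter glasses open only during their assigned frames, so each viewer perceives the total framerate divided by the number of viewers.

```python
# Toy sketch of time-multiplexed shutter glasses (illustrative only):
# display frames are dealt out round-robin, one viewer per timeslice,
# and each viewer's glasses are transparent only during their own slots.

DISPLAY_HZ = 120  # total frames the projectors can show per second (assumed)

def schedule_slots(num_viewers, total_frames):
    """Assign each display frame to one viewer, round-robin."""
    return [frame % num_viewers for frame in range(total_frames)]

def effective_framerate(num_viewers, display_hz=DISPLAY_HZ):
    """Frames per second each individual viewer actually perceives."""
    return display_hz // num_viewers

slots = schedule_slots(3, 12)
print(slots)                   # [0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2]
print(effective_framerate(3))  # 40
```

The design consequence is visible immediately: every extra observer cuts everyone's perceived framerate, which is one reason the data and display rates involved are so demanding.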

The basic model for a T-I room would be that one half of the physical room is taken up by the system, and when it is in operation, the virtual room created is effectively the two "real" rooms coupled together. AR objects can be passed between the two parties, employing haptic feedback.

Research and development of tele-immersion--the simulation of a common environment for multiple communicators who are, in fact, in different locations--has proceeded rapidly since the process was first described here in early 2000.

It was May 9, 2000, when Jaron Lanier (often described as “the father of virtual reality”) and his team demonstrated tele-immersion's feasibility by bringing together three disparate locations and participants in a “virtual” three-dimensional office-of-tomorrow. Speed and the overall quality of the images transmitted have improved greatly since then.

The most significant addition to the concept, however, came from researchers at Brown University, led by Andries van Dam, in October 2000. A miniature office interior, about two feet wide, was placed upon a virtual desktop and manipulated three-dimensionally, collaboratively, and in real time.

Faster processors and an enormous increase in bandwidth have allowed scientists to proceed with practical tele-immersion in the following (greatly simplified) manner:

An array of cameras is used to view people and their surroundings from different angles. These cameras can be hidden, mounted in the ceiling, or mounted behind tiny perforations in the screen the user is viewing.

The scene is illuminated by imperceptibly structured light. The light looks “normal,” but is in fact made up of brief flickering patterns that help the computers make sense of subtle differences in imagery.
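One way such patterns can carry information is binary stripe coding, a standard structured-light technique (the sketch below is a generic illustration, not the specific patterns used in the tele-immersion demos): a sequence of on/off stripe patterns is projected, and the on/off sequence seen at any camera pixel spells out which projector column illuminated it.

```python
import math

# Toy sketch of binary structured light (illustrative assumption, not the
# actual tele-immersion patterns): log2(WIDTH) stripe patterns are projected
# in sequence, and the bits observed at a pixel encode its projector column.

WIDTH = 8                             # projector columns in this toy example
NUM_PATTERNS = int(math.log2(WIDTH))  # 3 patterns suffice for 8 columns

def pattern_bit(pattern_index, column):
    """Is this projector column lit during pattern number `pattern_index`?"""
    return (column >> (NUM_PATTERNS - 1 - pattern_index)) & 1

def decode_column(observed_bits):
    """Recover the column index from the on/off values a pixel observed."""
    value = 0
    for bit in observed_bits:
        value = (value << 1) | bit
    return value

# Simulate a camera pixel lit by projector column 5 across the 3 patterns.
observed = [pattern_bit(p, 5) for p in range(NUM_PATTERNS)]
print(observed)                 # [1, 0, 1]
print(decode_column(observed))  # 5
```

Because the patterns flicker faster than the eye integrates (or sit outside the visible spectrum), the scene looks normally lit while the computers recover per-pixel correspondence.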

The cameras' images at a given moment are sorted into subsets: trios of images that overlap.

From each trio of images, a “disparity map” is calculated, which reflects the amount of variation among the images at all points. The disparities are analyzed and combined into a “bas relief” depth map of the scene.
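The core of the disparity calculation can be illustrated with simple block matching between two views (the real system works on trios of images; this binocular, one-dimensional toy is only meant to show what "amount of variation among the images" means in practice): for each pixel in one view, find the horizontal shift that best aligns a small window with the other view. Nearby surfaces shift more between views, so large disparity means small depth.

```python
# Minimal sketch of disparity estimation by block matching (illustrative
# simplification of the trinocular process described above): for each pixel
# in `left`, find the shift `d` minimising the sum of absolute differences
# against `right` over a small window.

def disparity_map(left, right, window=1, max_disp=4):
    width = len(left)
    disparities = []
    for x in range(width):
        best_d, best_cost = 0, float("inf")
        for d in range(min(max_disp, x) + 1):
            cost = sum(
                abs(left[x + k] - right[x - d + k])
                for k in range(-window, window + 1)
                if 0 <= x + k < width and 0 <= x - d + k < width
            )
            if cost < best_cost:
                best_d, best_cost = d, cost
        disparities.append(best_d)
    return disparities

# The right view sees the textured patch (9, 5, 7) shifted left by 2 pixels,
# so the disparity over that patch should come out as 2.
left = [0, 0, 9, 5, 7, 0, 0, 0]
right = [9, 5, 7, 0, 0, 0, 0, 0]
print(disparity_map(left, right))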

All the depth maps are then combined into a single viewpoint-independent sculptural model of the scene at a particular moment.
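Combining the depth maps amounts to back-projecting each camera's samples into a shared world coordinate system and pooling the results. The sketch below is a made-up, flattened (2D, pinhole-camera) illustration of that merge step; the camera positions and intrinsics are invented for the example.

```python
# Toy sketch of merging per-camera depth maps into one viewpoint-independent
# model (assumed 2D world, pinhole cameras looking along +z; all numbers
# here are invented for illustration).

def back_project(depth_map, camera_x, cx=1.0, focal=2.0):
    """Back-project a 1D depth map into shared (x, z) world coordinates
    for a camera at horizontal offset `camera_x`."""
    return {
        (camera_x + (pixel - cx) * depth / focal, depth)
        for pixel, depth in enumerate(depth_map)
    }

def merge_views(views):
    """Pool every camera's back-projected samples into one point cloud."""
    cloud = set()
    for depth_map, camera_x in views:
        cloud |= back_project(depth_map, camera_x)
    return cloud

# Two cameras, one unit apart, each contributing a 3-pixel depth map.
cloud = merge_views([([4.0, 4.0, 6.0], 0.0),
                     ([6.0, 4.0, 4.0], 1.0)])
print(sorted(cloud))
```

Once the samples live in one world frame, the model no longer depends on any camera's viewpoint, which is what lets the far end re-render the scene from wherever each observer's eyes happen to be.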

At this time, final stereoscopic images are polarized and the viewer must wear glasses in order to perceive three-dimensionality. In the future “autostereoscopic” displays will channel images to each eye separately, and glasses will not be needed.

Jaron Lanier writes in the April 2001 Scientific American:

“Roughly speaking, tele-immersion is about 100 times too expensive to compete with other communications techniques right now and needs more polishing besides. My best guess is that it will be good enough and cheap enough for limited introduction in approximately five years and for widespread use in around 10 years.”
But when it arrives, tele-immersion promises to once again change the way we do business and seek pleasure. Engineers will collaborate over thousands of miles on machines and structures that do not even exist in reality. Archeologists will be “virtually” present at the moment of important new discoveries. Some even predict that tele-immersion will take the place of air travel in the not-that-distant future.

New art forms will evolve, as will new problems. It doesn’t take much imagination to predict how tele-immersion might devolve on its way to ubiquity. Jaron Lanier writes:

“I am often asked if it is frightening to work on new technologies that are likely to have a profound impact on society without being able to know what that impact will be. My answer is that because tele-immersion is fundamentally a tool to help people connect better, the question is really about human nature. I believe that communications technologies increase the opportunities for empathy and thus for moral behavior. Consequently, I am optimistic that whatever role tele-immersion ultimately takes on, it will mostly be for the good.”

Scientific American, Volume 284, Number 4, April 2001