
1.INTRODUCTION
Tele-immersion, a new medium for human interaction enabled by digital technologies, approximates the illusion that a user is in the same physical space as other people, even though the other participants might in fact be hundreds or thousands of miles away. It combines the display and interaction techniques of virtual reality with new vision technologies that transcend the traditional limitations of a camera. Rather than merely observing people and their immediate environment from one vantage point, tele-immersion stations convey them as "moving sculptures," without favoring a single point of view. The result is that all the participants, however distant, can share and explore a life-size space. Tele-immersion is a technology that will be implemented over Internet2. It will enable users in different geographic locations to come together and interact in a simulated holographic environment. Users will feel as if they are actually looking at, talking to and meeting with each other face to face in the same place, even though they may be miles apart physically. In a tele-immersive environment, computers recognize the presence and movements of individuals as well as physical and virtual objects. They can then track these people and objects and project them in a realistic way across many geographic locations. The three steps to constructing a holographic environment are:

1. The computer recognizes the presence and movements of people and objects.
2. The computer tracks those images.
3. The computer projects those images onto a stereo-immersive surface.

3D reconstruction for tele-immersion is performed using stereo vision: two or more cameras take rapid sequential shots of the same objects, the computer continuously performs distance calculations, and the results are projected into a simulated environment that replicates the real-time movements. By combining cameras and Internet telephony, videoconferencing has allowed real-time exchange of more information than ever, without physically bringing each person into one central room. Beyond improving on videoconferencing, tele-immersion was conceived as an ideal application for driving network-engineering research, specifically for Internet2, the primary research consortium for advanced network studies in the U.S. If a computer network can support tele-immersion, it can probably support any other application.

2.THE HISTORY
It was way back in 1965 that the great pioneer of computer graphics, Ivan Sutherland, proposed the concept of the ultimate display. It described a graphics display that would allow the user to experience a completely computer-rendered environment. In 1998, Abilene, a backbone research project, was launched and now serves as a base for Internet2 research. Internet2 needed an application that would challenge and stretch its network's capabilities. The head of Advanced Network and Services proposed tele-immersion as the application that could drive Internet2 research forward; that is how the National Tele-immersion Initiative was formed. In May 2000, researchers at the University of North Carolina (UNC), the University of Pennsylvania and Advanced Network and Services reached a milestone in developing this technology: a user sitting in an office at UNC in Chapel Hill, NC, was able to see lifelike, 3D images of colleagues hundreds of miles away, one in Philadelphia and the other in New York. Today scientists are still developing this new communication technology. Several groups are working together on the National Tele-immersion Initiative (NTII) to make this technology available to everyone.

3.WHAT IS TELE-IMMERSION?


Tele-immersion enables users at geographically distributed sites to collaborate in real time in a shared, simulated, hybrid environment as if they were in the same physical room. It is the ultimate synthesis of media technologies: 3D environment scanning, projective and display technologies, tracking technologies, audio technologies and powerful networking. The considerable requirements of a tele-immersion system, such as high bandwidth, low latency and low latency variation, make it one of the most challenging network applications. This application is therefore considered to be an ideal driver for the research agendas of the Internet2 community. Tele-immersion is that sense of shared presence with distant individuals and their environments that feels substantially as if they were in one's own local space. This kind of tele-immersion differs significantly from conventional video teleconferencing in that the user's view of the remote environment changes dynamically as he moves his head.

3.1.VIDEOCONFERENCING VS TELE-IMMERSION
Human interaction has both verbal and nonverbal elements, and videoconferencing seems precisely configured to confound the nonverbal ones. It is impossible to make eye contact perfectly, for instance, in today's videoconferencing systems, because the camera and the display screen cannot be in the same spot. This usually leads to a deadened and formal affect in interactions, eye contact being a nearly ubiquitous subconscious method of affirming trust. Furthermore, participants aren't able to establish a sense of position relative to one another and therefore have no clear way to direct attention, approval or disapproval. Tele-immersion is an improved version of this digital technology. Users can make eye contact, which gives a feeling of trust. Tele-immersion approximates the illusion that a user is in the same physical space as other people, even though they may be far apart. Rather than merely observing the people and their immediate environment from one vantage point, tele-immersion stations convey them as moving sculptures, without favoring a single point of view. Participants can share a life-size space, they are able to convey emotions with the right intensity, and a three-dimensional view of the room is obtained. It can also support shared simulated models.

3.2.NEW CONCEPTS AND CHALLENGES


In a tele-immersive environment computers recognize the presence and movements of individuals and both physical and virtual objects, track those individuals and objects, and project them in realistic, multiple, geographically distributed immersive environments on stereo-immersive surfaces. This requires sampling and resynthesis of the physical environment as well as the users' faces and bodies, which is a new challenge that will move the range of emerging technologies, such as scene-depth extraction and warp rendering, to the next level. Tele-immersive environments will therefore facilitate not only interaction between users themselves but also between users and computer-generated models and simulations. This will require expanding the boundaries of computer vision, tracking, display, and rendering technologies. As a result, all of this will enable users to achieve a compelling experience and will lay the groundwork for a higher degree of their inclusion into the entire system. In order to fully utilize tele-immersion, we need to provide interaction that is as seamless as the real world but allows even more effective communication. For example, in the real world someone involved in a meeting might draw a picture on paper and then show the paper to the other people in the meeting. In tele-immersion spaces people have the opportunity to communicate in fundamentally new ways.

4.REQUIREMENTS OF TELE-IMMERSION
Tele-immersion is the ultimate synthesis of media technologies. It needs the best out of every media technology. The requirements are given below.

4.1.3D ENVIRONMENT SCANNING


For better exploration of the environment a stereoscopic view is required, so a 3D environment scanning method must be used. Multiple cameras produce two separate images, one for each eye, and by using polarized glasses each eye sees only its own view, giving a 3D impression. The key is that in tele-immersion each participant must have a personal viewpoint of remote scenes; in fact, two of them, because each eye must see from its own perspective to preserve a sense of depth. Furthermore, participants should be free to move about, so each person's perspective will be in constant motion. Tele-immersion demands that each scene be sensed in a manner that is not biased toward any particular viewpoint (a camera, in contrast, is locked into portraying a scene from its own position). Each place, and the people and things in it, has to be sensed from all directions at once and conveyed as if it were an animated three-dimensional sculpture. Each remote site receives information describing the whole moving sculpture and renders viewpoints as needed locally.

The scanning process has to be accomplished fast enough to take place in real time, at most within a small fraction of a second. The sculpture representing a person can then be updated quickly enough to achieve the illusion of continuous motion. This illusion starts to appear at about 12.5 frames per second (fps) but becomes robust at about 25 fps and better still at faster rates.

Measuring the moving three-dimensional contours of the inhabitants of a room and its other contents can be accomplished in a variety of ways. In 1993, Henry Fuchs of the University of North Carolina at Chapel Hill proposed one method, known as the "sea of cameras" approach, in which the viewpoints of many cameras are compared. In typical scenes in a human environment, there will tend to be visual features, such as a fold in a sweater, that are visible to more than one camera. By comparing the angle at which these features are seen by different cameras, algorithms can piece together a three-dimensional model of the scene. This technique had been explored in non-real-time configurations, which later culminated in the "Virtualized Reality" demonstration at Carnegie Mellon University, reported in 1995. That setup consisted of 51 inward-looking cameras mounted on a geodesic dome. Because it was not a real-time device, it could not be used for tele-immersion. Ruzena Bajcsy, head of the GRASP (General Robotics, Automation, Sensing and Perception) Laboratory at the University of Pennsylvania, was intrigued by the idea of real-time seas of cameras, and starting in 1994 her group introduced small-scale "puddles" of two or three cameras to gather real-world data for virtual-reality applications.
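The core geometric operation behind a sea of cameras is triangulation: once the same surface feature has been matched in two or more calibrated views, its 3D position can be recovered from the viewing angles. The following minimal sketch is only an illustration of that operation for a single camera pair using OpenCV; it is not the UNC or CMU implementation, and the camera poses and feature coordinates are made-up placeholder values.

```python
# Toy triangulation of matched features seen by two calibrated cameras.
import numpy as np
import cv2

# 3x4 projection matrices (normalized coordinates, identity intrinsics assumed).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))]).astype(np.float32)   # reference camera
t = np.array([[0.1], [0.0], [0.0]], dtype=np.float32)              # second camera 10 cm to the right
P2 = np.hstack([np.eye(3), -t]).astype(np.float32)

# Matched image features (e.g. the same fold in a sweater seen by both cameras), 2xN.
pts1 = np.array([[0.10, 0.25], [0.05, 0.30]], dtype=np.float32).T
pts2 = np.array([[0.08, 0.25], [0.03, 0.30]], dtype=np.float32).T

# Triangulate: returns 4xN homogeneous coordinates; divide by w to get 3D points.
X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
X = (X_h[:3] / X_h[3]).T
print(X)   # estimated 3D positions of the matched features
```

With dozens of cameras, the same operation is applied to every feature matched across overlapping views, and the resulting points are merged into the moving sculpture.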

But a sea of cameras in itself isn't a complete solution. Suppose a sea of cameras is looking at a clean white wall. Because there are no surface features, the cameras have no information with which to build a sculptural model. A person can look at a white wall without being confused. Humans don't worry that a wall might actually be a passage to an infinitely deep white chasm, because we don't rely on geometric cues alone; we also have a model of a room in our minds that can rein in errant mental interpretations. Unfortunately, to today's digital cameras a person's forehead or T-shirt can present the same challenge as a white wall, and today's software isn't smart enough to undo the confusion that results.

Researchers at Chapel Hill came up with a novel method that has shown promise for overcoming this obstacle, called imperceptible structured light (ISL). Conventional light bulbs flicker 50 or 60 times a second, fast enough for the flickering to be generally invisible to the human eye. Similarly, ISL appears to the human eye as a continuous source of white light, like an ordinary light bulb, but in fact it is filled with quickly changing patterns visible only to specialized, carefully synchronized cameras. These patterns fill in voids such as a white wall with imposed features that allow a sea of cameras to complete the measurements. If imperceptible structured light is not used, there may be holes in the reconstruction data resulting from occlusions, areas that aren't seen by enough cameras, or areas that don't provide distinguishing surface features.

To accomplish simultaneous capture and display, an office of the future is envisioned in which the ceiling lights are replaced by computer-controlled cameras and "smart" projectors that are used to capture dynamic image-based models with imperceptible structured light techniques, and to display high-resolution images on designated display surfaces. By doing both simultaneously on the designated display surfaces, one can dynamically adjust or auto-calibrate for geometric, intensity, and resolution variations resulting from irregular or changing display surfaces, or overlapped projector images. The current approach to dynamic image-based modeling is to use an optimized structured-light scheme that can capture per-pixel depth and reflectance at interactive rates. The approach to rendering on the designated (potentially irregular) display surface is to employ a two-pass projective texture scheme to generate images that, when projected onto the surfaces, appear correct to a moving head-tracked observer.
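The principle behind imperceptible structured light can be illustrated with a toy calculation: a binary pattern and its complement are projected on alternating frames, so their time average looks like uniform white light to the eye, while a camera synchronized to individual frames still sees the pattern. The sketch below is only a numerical illustration of that averaging argument, not the Chapel Hill projector code.

```python
# Toy illustration of the averaging idea behind imperceptible structured light.
import numpy as np

rng = np.random.default_rng(0)
pattern = rng.integers(0, 2, size=(8, 8)).astype(float)  # structured-light pattern
complement = 1.0 - pattern                                # shown on alternate frames

# What the eye integrates over many frames: the average of the two patterns.
perceived = 0.5 * (pattern + complement)
assert np.allclose(perceived, 0.5)        # uniform "white" illumination

# What a synchronized camera captures on an odd frame: the pattern itself, which
# gives the stereo algorithms features to match on otherwise textureless surfaces.
camera_frame = pattern
print(camera_frame)
```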

Image processing
At the transmitting end, the 3D image to be conveyed is generated using one of two techniques:

Shared-table approach: Here, the depth of the 3D image is calculated using 3D wireframes. This technique uses various camera views and complex image-analysis algorithms to calculate the depth.

IC3D (incomplete 3D) approach: In this case, a common texture surface is extracted from the available camera views and the depth information is coded in an associated disparity map. This representation can be encoded into an MPEG-4 video object, which is then transmitted.

Fig. 1: Left and right camera views of the stereo test sequence

Fig. 2: Texture and disparity maps extracted from the stereo test sequence
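A disparity map like the one in Fig. 2 can be computed by matching small blocks between a rectified left/right image pair and then converting each disparity to a depth. The sketch below is a minimal illustration using OpenCV's standard block matcher; the file names and the focal-length and baseline values are placeholders, not figures from the test sequence above.

```python
# Minimal dense-disparity sketch: block matching plus depth-from-disparity.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # rectified left view (placeholder file)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # rectified right view (placeholder file)

matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

# Depth from disparity for a parallel stereo rig: Z = f * B / d,
# where f is the focal length in pixels and B the camera baseline in metres.
f_pixels, baseline_m = 800.0, 0.12          # assumed calibration values
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = f_pixels * baseline_m / disparity[valid]
```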

4.2.RECONSTRUCTION IN A HOLOGRAPHIC ENVIRONMENT


The reconstruction of the image occurs in the holographic environment, and the reconstruction process differs between the shared-table and the IC3D approach.

Shared-table approach: Assuming that the geometrical parameters of the multi-view setup, the virtual scene and the virtual camera are known, it is ensured that the scene is viewed from the right perspective, even while the viewing position changes.

IC3D approach: The decoded disparities are scaled according to the user's 3D viewpoint in the virtual scene, and a disparity-controlled projection is carried out. The 3D perspective of the person changes with the movement of the virtual camera.

In both approaches, at the receiving end the fully composed 3D scene is rendered onto the 2D display of the terminal by means of a virtual camera. The position of the virtual camera coincides with the current position of the conferee's head; for this purpose the head position is continuously registered by a head tracker and the virtual camera is moved with the head (a minimal sketch of such a head-coupled camera appears at the end of this subsection).

Components of a holographic environment: Tele-immersive displays of earlier days required the user to wear special goggles and a head device that tracked the viewpoint of the user looking at the screen. At the other end, the people who appeared as 3D images were tracked with an array of eight ordinary video cameras, while three other video cameras captured light patterns projected into each room to calculate distances. This enabled the proper depth to be recreated on the screen: if an observer moved her head to the left, she could see the corresponding image that she would see if she were actually in the room with the person on the screen. Scientists are developing new technologies to support this type of communication. Among these technologies are:

Tele-cubicle: Users will communicate by using this technology. It consists of a stereo-immersive desk surface and two stereo-immersive wall surfaces. These three display surfaces, fitted to each other and joined to the capture devices, form a virtual conference table in the centre. This will allow the realistic inclusion of tele-immersion into the work environment, as it will take up the usual amount of desk space.

Internet2: A consortium made up of the US government, industry and academia (about 180 universities) that has been formed for creating tomorrow's Internet. This new network will have a higher bandwidth and speeds that are up to 1,000 times faster than today's Internet. Such a high-bandwidth, high-speed network is necessary to transfer the large amounts of data that tele-immersion will produce.

Bandwidth issues: The network bandwidth required to make tele-immersion work is one of the main concerns of this new technology. It is estimated that as much as 1.2 gigabits per second will be needed for future high-quality effects, which is much higher than the average home connection bandwidth. The exact amount of bandwidth needed for each scene depends on the complexity of the background. With time, the number of megabits used will fall as advanced compression techniques are established. Currently, the last mile of the network connection for top computer science departments in the US is an OC3 line. This can carry 155 megabits per second and supports, at a basic level, a three-way conversation. Although OC3 lines are about 100 times faster than normal broadband, they are also correspondingly more expensive.
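As noted above, the rendered perspective must follow the conferee's tracked head position. The sketch below shows, under assumed conventions (it is not taken from the NTII software), how a view matrix can be rebuilt every frame from the latest head-tracker reading so that the 3D scene is always drawn from the viewer's current vantage point.

```python
# Minimal head-coupled virtual camera: rebuild the view matrix from the tracker each frame.
import numpy as np

def look_at(eye, target, up=np.array([0.0, 1.0, 0.0])):
    """Build a 4x4 view matrix with the camera at `eye`, looking at `target`."""
    forward = target - eye
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, up); right /= np.linalg.norm(right)
    true_up = np.cross(right, forward)
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = right, true_up, -forward
    view[:3, 3] = -view[:3, :3] @ eye            # translate the world into camera space
    return view

def render_frame(head_position, scene_center):
    # Placeholder for the renderer: a real system would compute one such matrix
    # per eye, many times a second, and hand it to the graphics pipeline.
    return look_at(np.asarray(head_position, float), np.asarray(scene_center, float))

view = render_frame(head_position=[0.1, 1.6, 0.8], scene_center=[0.0, 1.2, -1.0])
print(view)
```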

Fig.: Bandwidth utilization over time

Initially, bandwidth-intensive applications will have to be limited to the larger organizations that can afford high connection speeds. The amount of data sent to render this telepresence will also require fast processing power, and this will need to be available as required over the Internet. A new network called the Grid could be a solution. The Grid will use distributed computing: there are not enough supercomputers to deal with the enormous amount of data that will rush through the net in the future, so as a solution new networks will connect PCs so they can share processing power and hard disk space. They will be locked into a grid, effectively creating one supercomputer.

Display technologies: Stereo-immersive displays would have to present a clear view of the scenes being transmitted.

Haptic sensors: These would allow users to touch projections as if they were real.

Desktop supercomputers: These would perform the trillions of calculations needed to create a holographic environment. A network of computers that share power could also possibly support these environments.

4.3.PROJECTIVE & DISPLAY TECHNOLOGIES


Using tele-immersion, a user must feel that he is immersed in the other person's world. For this, a projected view of the other user's world is needed. Producing such a projected view requires a big screen; for better projection, the screen must be curved and special projection cameras must be used.

4.4.TRACKING TECHNOLOGIES
It is essential that each of the objects in the immersive environment be tracked so that we get a real-world experience. This is done by tracking the movement of the user and adjusting the camera accordingly.

Moving sculptures: Tele-immersion combines the display and interaction techniques of virtual reality with new vision technologies that transcend the traditional limitations of a camera. Rather than merely observing people and their immediate environment from one vantage point, tele-immersion stations convey them as moving sculptures, without favoring a single point of view. The result is that all the participants, however distant, can share and explore a life-size space.

Head and hand tracking: The UNC and Utah sites collaborated on several joint design-and-manufacture efforts, including the design and rapid production of a head-tracker component (HiBall), now used in the experimental UNC wide-area ceiling tracker. Precise, unencumbered tracking of a user's head and hands over a room-sized working area has been an elusive goal in modern technology and the weak link in most virtual-reality systems. Current commercial offerings based on magnetic technologies perform poorly around such ubiquitous, magnetically noisy computer components as CRTs, while optical-based products have a very small working volume and require illuminated beacon targets (LEDs). The lack of an effective tracker has crippled a host of augmented-reality applications in which the user's view of the local surroundings is augmented by synthetic data (e.g., the location of a tumor in a patient's breast or the removal path of a part from within a complicated piece of machinery).

4.5.AUDIO TECHNOLOGIES
For a true immersive effect the audio system has to be extended to another dimension, i.e., a 3D sound capturing and reproduction method has to be used. This is necessary to track each sound source's relative position.

4.6.POWERFUL NETWORKING
If a computer network can support tele-immersion, it can probably support any other application. This is because tele-immersion demands as little delay as possible from flows of information (and as little inconsistency in delay), in addition to the more common demands for very large and reliable flows. The considerable requirements of a tele-immersion system, such as high bandwidth, low latency and low latency variation (jitter), make it one of the most challenging net applications.

Internet2: the driving force behind tele-immersion
Internet2 is the next-generation Internet, and tele-immersion was conceived as an ideal application for driving network-engineering research on it. Internet2 is a consortium consisting of the US government, industry and around 200 universities and colleges. It has high bandwidth and speed and enables revolutionary Internet applications.

Need for speed: If a computer network can support tele-immersion, it can probably support any other application. This is because tele-immersion demands as little delay as possible from flows of information (and as little inconsistency in delay), in addition to the more common demands for very large and reliable flows.

Strain on the network: In tele-immersion not only the participants' motion but also the entire surface of each participant has to be sent, so it strains a network very heavily. Bandwidth is a crucial concern. The demand for bandwidth varies with the scene and application; a more complex scene requires more bandwidth. Conveying a single person at a desk, without the surrounding room, at a slow frame rate of about two frames per second has proved to require around 20 megabits per second, with peaks of up to 80 megabits per second.
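A quick back-of-the-envelope check, using only the per-stream figures quoted above, shows why the text later calls an OC3 line "just about right" for a three-way conversation at a slow frame rate. The sketch below is merely that arithmetic, not a measured result.

```python
# Rough per-site bandwidth estimate for an n-way tele-immersion session,
# using the document's own figures (20 Mb/s average, 80 Mb/s peak per stream).
OC3_MBPS = 155.0
avg_stream_mbps, peak_stream_mbps = 20.0, 80.0
participants = 3

incoming_streams = participants - 1                   # each site receives every other site's stream
avg_inbound = incoming_streams * avg_stream_mbps      # 40 Mb/s on average
peak_inbound = incoming_streams * peak_stream_mbps    # 160 Mb/s during simultaneous peaks

print(f"average inbound: {avg_inbound} Mb/s, peak inbound: {peak_inbound} Mb/s, "
      f"OC3 capacity: {OC3_MBPS} Mb/s")
# Average traffic fits comfortably; simultaneous peaks just exceed the line,
# which is why an OC3 is described as "just about right" rather than ample.
```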

Network backbone: A backbone is a network within a network that lets information travel over exceptionally powerful, widely shared connections to go long distances more quickly. Some notable backbones designed to support research were the NSFnet in the late 1980s and the vBNS in the mid-1990s. Each of these played a part in inspiring new applications for the Internet, such as the World Wide Web. Another backbone research project, called Abilene, began in 1998, and it was to serve a university consortium called Internet2. Abilene now reaches more than 170 American research universities. If the only goal of Internet2 were to offer a high level of bandwidth (that is, a large number of bits per second), then the mere existence of Abilene and related resources would be sufficient. But Internet2 research targeted additional goals, among them the development of new protocols for handling applications that demand very high bandwidth and very low, controlled latencies (delays imposed by processing signals en route). The last mile of network connection that runs into computer science departments currently tends to be an OC3 line, which can carry 155 megabits per second, just about right for sustaining a three-way conversation at a slow frame rate. But an OC3 line has approximately 100 times more capacity than what is usually considered a broadband connection now, and it is correspondingly more expensive.

Computational needs: Beyond the scene-capture system, the principal components of a tele-immersion setup are the computers, the network services, and the display and interaction devices. Each of these components has been advanced in the cause of tele-immersion and must advance further. Tele-immersion is a voracious consumer of computer resources: literally dozens of processors are currently needed at each site to keep up with its demands. Roughly speaking, a cluster of eight two-gigahertz Pentium processors with shared memory should be able to process a trio within a sea of cameras in approximately real time, and such processor clusters should become commonly available in the coming years. One promising avenue of exploration in the next few years will be routing tele-immersion processing through remote supercomputer centers in real time to gain access to superior computing power. In this case, a supercomputer will have to be fast enough to compensate for the extra delay caused by the travel time to and from its location.

Bandwidth is a crucial concern. The demand for bandwidth varies with the scene and application; a more complex scene requires more bandwidth. We can assume that much of the scene, particularly the background walls and such, is unchanging and does not need to be resent with each frame. Conveying a single person at a desk, without the surrounding room, at a slow frame rate of about two frames per second has proved to require around 20 megabits per second but with up to 80-megabit-per-second peaks. With time, however, that number will fall as better compression techniques become established. Each site must receive the streams from all the others, so in a three-way conversation the bandwidth requirement must be multiplied accordingly. The last mile of network connection that runs into computer science departments currently tends to be an OC3 line, which can carry 155 megabits per second, just about right for sustaining a three-way conversation at a slow frame rate. But an OC3 line is approximately 100 times more capacious than what is usually considered a broadband connection now, and it is correspondingly more expensive.

Tele-cubicle: The tele-cubicle represents the next-generation immersive interface. It can also be seen as a subset of all possible immersive interfaces. An office appears as one quadrant in a larger shared virtual office space. The canvases onto which the imagery can be displayed are a stereo-immersive desk surface as well as at least two stereo-immersive wall surfaces. Such a system represents the unification of virtual reality and videoconferencing, and it provides an opportunity for the full integration of VR into the workflow. Physical and virtual environments appear united for both input and display. This combination, we believe, offers a new paradigm for human communication and collaboration.

RESULTS OF THE DEMO IN OCTOBER 2000: In the demo in October 2000, most of the confetti-like reconstruction noise was gone and the overall quality and speed of the system had increased, but the most important improvement came from researchers at Brown University: a demonstration of a unified system with 3D real-time acquisition data (real data), 3D synthetic objects (virtual data) and user interactions with 3D objects using a virtual laser pointer. The participants in the session were not only able to see each other in 3D but were also able to engage in collaborative work, here a simple example of interior office design. The remote site in the demo was Advanced Network & Services, Armonk, NY, and the local site where images were taken was the University of North Carolina at Chapel Hill, NC. The data were sent over Internet2 links (the Abilene backbone) at a rate of 15-20 Mb/sec (no compression applied), with 3D real-time acquisition data combined with a static 3D background and synthetic 3D graphics objects. For the interactive part a magnetic tracker was used to mimic a virtual laser pointer, as well as a mouse. All synthetic objects were either downloaded or created on the fly. Both users could move objects around the scene and collaborate in the design process. In between the two people are virtual objects (the furniture models). These are objects that don't come from either physical place; they can be created and manipulated on the fly, and there is a deep architecture behind them (which was written at Brown University).

Fig. 3: Three-way videoconferencing using tele-immersion

The tele-cubicle consists of two wall surfaces and a desk surface which project 3D images: a stereo-immersive desk surface and two stereo-immersive wall surfaces. These three display surfaces join to form a corner desk unit. The walls appear as windows to the other users' environment while the desks join together to form a virtual conference table in the centre. This will allow the realistic inclusion of tele-immersion into the work environment, as it will take up the usual amount of desk space. Today's tele-immersion combines the superior display of CAVE and ImmersaDesk display systems with advanced network capabilities. The CAVE (Cave Automatic Virtual Environment) is a multi-display virtual-reality device composed of three projection screens, two "walls" and a "floor", which projects real-time images in response to the user's eye and/or head movements. To ensure the quality of the picture and the timeliness of the response, the CAVE must be controlled by a powerful machine or supercomputer; in some cases, CAVE processing units contain up to sixteen processors. The CAVE is an example of a 3D display system that can implement the tele-cubicle.

TELE-IMMERSION STUDIO


It is a room with an array of video cameras to provide multiple viewpoints and a group of computers to process the digitized images. The people, who appear as 3-D images, are tracked with an array of eight ordinary video cameras while three other video cameras capture real light patterns projected into the room to calculate distances. This enables the proper depth to be recreated in the 3-D space. In a remote location, a viewer sits in front of a screen, wearing polarized glasses like those used for 3-D movies. The screen shows what or who is in front of the array of video cameras. If the observer moves his or her head to the left, he or she can see the corresponding images that would be seen if he or she were actually in the room with the person on the screen.

5.HOW TELE-IMMERSION WORKS


In this simplified scheme for how a future tele-immersion system might work, two partners separated by 1,000 miles collaborate on a new engine design.

Following the flow of information: tele-immersion depends on intense data processing at each end of a connection, mediated by a high-performance network.

From the sender: Parallel processors accept visual inputs from the cameras and reinterpret the scene as a three-dimensional computer model.

To the receiver: Specific renderings of remote people and places are synthesized from the model as it is received, to match the point of view of each eye of a user. The whole process repeats many times a second to keep up with the user's head motion.

Generating the 3-D image:
1. An array of cameras views people and their surroundings from different angles. Each camera generates an image from its point of view many times a second.

Fig.: Array of cameras

2. Each set of images taken at a given instant is sorted into subsets of overlapping trios of images.

Fig.: Views taken by the cameras

3. From each trio of images, a disparity map is calculated, reflecting the degree of variation among the images at all points in the visual field. The disparities are then analyzed to yield depths that would account for the differences between what each camera sees. These depth values are combined into a bas-relief depth map of the scene.

Fig.: Disparity map

4. All the depth maps are combined into a single viewpoint-independent sculptural model of the scene at a given moment. The process of combining the depth maps also provides opportunities for removing spurious points and noise.

Fig.: Final view
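Step 4 above amounts to expressing every per-camera depth map in one common world frame and merging the results into a single viewpoint-independent model. The sketch below illustrates that idea with a simple point-cloud merge; the camera conventions and the toy numbers are assumptions for illustration, not the NTII code.

```python
# Back-project several depth maps into world space and merge them into one point cloud.
import numpy as np

def backproject(depth, fx, fy, cx, cy, cam_to_world):
    """Turn one H x W depth map (metres) into Nx3 world-space points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=-1).reshape(-1, 4)
    pts_world = pts_cam @ cam_to_world.T
    valid = pts_cam[:, 2] > 0                 # drop pixels with no depth estimate
    return pts_world[valid, :3]

def merge(depth_maps, intrinsics, poses):
    """Combine the depth maps from all camera trios into a single model."""
    clouds = [backproject(d, *k, pose) for d, k, pose in zip(depth_maps, intrinsics, poses)]
    return np.vstack(clouds)   # further filtering would remove spurious points and noise

# Example with toy numbers: two 2x2 depth maps from cameras at the world origin.
K = (1.0, 1.0, 0.5, 0.5)                                  # fx, fy, cx, cy
depths = [np.full((2, 2), 2.0), np.full((2, 2), 2.5)]
cloud = merge(depths, [K, K], [np.eye(4), np.eye(4)])
print(cloud.shape)   # (8, 3): eight merged 3D points
```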

Tele-immersion and virtual reality: Tele-immersion may sound like virtual reality, but there are major differences between the two technologies. While virtual reality allows you to move in a computer-generated 3-D environment, tele-immersion by itself creates a 3-D environment that you can see but not interact with. Interaction becomes possible by combining the two technologies.

6.APPLICATIONS
Tele-immersive holographic environments have a number of applications. Imagine a video game free of joysticks, in which you become a participant in the game, fighting monsters or scoring touchdowns. Instead of traveling hundreds of miles to visit your relatives during the holidays, you can simply call them up and join them in a shared holographic room. Doctors and soldiers could use tele-immersion to train in a simulated environment. Building inspectors could tour structures without leaving their desks. Automobile designers from different continents could meet to develop the next generation of vehicles. Surgeons in different geographical spaces could experiment with virtual medical procedures before working on actual patients. Medical techniques that are physically inaccessible in some places, for instance on offshore oil rigs and ships, could be used to save lives by manipulating virtual models. In the entertainment industry, ballroom dancers could train together from separate physical spaces. Instead of commuting to work for a board meeting, businesspersons could attend it by projecting themselves into the conference room. The list of applications is large and varied, and one thing is crystal clear: this technology will significantly affect the educational, scientific and medical sectors.

1) Collaborative engineering work: Teams of engineers might collaborate at great distances on computerized designs for new machines that can be tinkered with as though they were real models on a shared workbench. Archaeologists from around the world might experience being present during a crucial dig. Rarefied experts in building inspection or engine repair might be able to visit locations without losing time to air travel.

2) Videoconferencing: Although few would claim that tele-immersion will be absolutely as good as "being there" in the near term, it might be good enough for business meetings, professional consultations, training sessions, trade show exhibits and the like. Business travel might be replaced to a significant degree by tele-immersion in 10 years. This is not only because tele-immersion will become better and cheaper but because air travel will face limits to growth because of safety, land use and environmental concerns.

3) Immersive electronic book: Applications of tele-immersion will include immersive electronic books that in effect blend a "time machine" with 3D hypermedia, adding an additional important dimension: being able to record experiences in which a viewer, immersed in the 3D reconstruction, can literally walk through the scene or move backward and forward in time. While there are many potential application areas for such novel technologies (e.g., design and virtual prototyping, maintenance and repair, paleontological and archaeological reconstruction), the focus here will be on a socially important and technologically challenging driving application: teaching the surgical management of difficult, potentially lethal injuries.

4) Collaborative mechanical CAD: A group of designers will be able to collaborate from remote sites in an interactive design process. They will be able to manipulate a virtual model starting from the conceptual design, review and discuss the design at each stage, perform the desired evaluation and simulation, and even finish off the cycle with the production of the concrete part on the milling machines.

5) Entertainment: Tele-immersive holographic environments have a number of applications. Imagine a video game free of joysticks, in which you become a participant in the game, fighting monsters or scoring touchdowns.

6) Live chat: Instead of traveling hundreds of miles to visit your relatives during the holidays, you can simply call them up and join them in a shared holographic room.

7) Medicine: Tele-immersion can be of immense use to the field of medicine. The way medicine is taught and practiced has always been very hands-on. It is impossible to treat a patient over the phone or give instructions for a tumour to be removed without physically being there. With the help of tele-immersion, 3D surgical learning for virtual operations is now in place and, in the future, the hope is to be able to carry out real surgery on real patients.

A geographically distant surgeon could be tele-immersed into an operation theatre to perform an operation. This could potentially be lifesaving if the patient is in need of special care (either a technique or a piece of equipment) which is not available at that particular location. Tele-immersion will give surgeons the ability to superimpose anatomic images right on their patients while they are being operated on.

8) Uses in education: In education, tele-immersion can be used to bring together students at remote sites in a single environment. Relationships among educational institutions could improve tremendously in the future with the use of tele-immersion. Already, the academic world is sharing information on research and development to better the end results. Doctors and soldiers could use tele-immersion to train in a simulated environment. This will be a distinct advantage in surgical training: while it will not replace hands-on training, this technology will give surgeons a chance to learn complex situations before they treat their patients. With tele-immersion in schools, students could have access to data or control a telescope from a remote location, or meet with students from other countries by projecting themselves into a foreign space. Internet2 will provide access to digital libraries and virtual labs, opening up the lines of communication for students. Tele-immersion will bring to them places, equipment and situations not available earlier, helping them experience what they could previously only have watched, read or heard about.

9) Future office: In years to come, instead of asking for a colleague on the phone, you will find it easier to instruct your computer to find him or her. Once you do that, you'll probably see a flicker on one of your office walls and find that your colleague, who is physically present in another city, is sitting right across from you as if he or she were right there. The person at the other end will experience the same immersive connection. With tele-immersion bringing two or more distant people together in a single, simulated office setting, business travel will become quite redundant. Videoconferencing via the Internet is not a perfect form of communication: the image is close to real time but there are delays that cause distorted video, and if someone walks out of the view of a camera the person is no longer visible. With tele-immersion, however, people will always remain in view of the camera and you will be able to look around their office just by looking at the display screen from different angles. Tele-immersion takes videoconferencing to a higher level; it is a dynamic concept which will transform the way humans interact with each other and with the world in general.


7.CHALLENGES OF TELE-IMMERSION

Tele-immersion has emerged as a high-end driver for the Quality of Service (QoS), bandwidth, and reservation efforts envisioned by the NGI and Internet2 leadership. From a networking perspective, tele-immersion is a very challenging technology for several reasons. The networks must be in place and tuned to support high-bandwidth applications. Low latency, needed for two-way collaboration, is hard to specify and guarantee given current middleware. The speed of light in fiber itself is a limiting factor over transcontinental and transoceanic distances. Multicast, unicast, reliable and unreliable data transmissions (called flows) need to be provided for and managed by the networks and the operating systems of supercomputer-class workstations. Real-time considerations for video and audio reconstruction (streaming) are critical to achieving the feel of telepresence, whether synchronous or recorded and played back. The computers, too, are bandwidth-limited with regard to handling very large data for collaboration. Simulation and data mining are open-ended in their computational and bandwidth needs: there will never be quite enough computing and bits per second to fully analyze and simulate reality for scientific purposes. In layman's language, the realization of tele-immersion is impossible today for three reasons:

1. The non-availability of high-speed networks
2. The non-availability of supercomputers
3. The large network bandwidth requirement
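The light-speed limit mentioned above can be made concrete with a quick estimate: light in optical fibre travels at roughly c divided by the fibre's refractive index (about 1.47 for silica), so distance alone imposes a latency floor that no protocol improvement can remove. The distances in the sketch below are rough illustrative values.

```python
# Propagation-delay floor imposed by the speed of light in optical fibre.
C_VACUUM_KM_S = 299_792          # speed of light in vacuum, km/s
FIBRE_INDEX = 1.47               # typical refractive index of silica fibre

def one_way_delay_ms(distance_km):
    return distance_km / (C_VACUUM_KM_S / FIBRE_INDEX) * 1000.0

for route, km in [("New York - Chapel Hill (approx.)", 700),
                  ("US transcontinental (approx.)", 4_000),
                  ("transoceanic (approx.)", 10_000)]:
    print(f"{route}: ~{one_way_delay_ms(km):.1f} ms one way, "
          f"~{2 * one_way_delay_ms(km):.1f} ms round trip")
```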

7.1.SOLUTION

The first two basic problems can be overcome when Internet2 comes into the picture, and the third problem can be overcome by the rapid development of image-compression techniques.

About Internet2: Internet2 is not a separate physical network and will not replace the current Internet. It is a not-for-profit consortium consisting of about 200 US universities together with industry and the US government. Internet2 exists for developing and deploying advanced network applications and technology, accelerating the creation of tomorrow's Internet. Internet2 enables completely new applications such as digital libraries, virtual laboratories, distance-independent learning and tele-immersion. A key goal of this effort is to accelerate the diffusion of advanced Internet technology, in particular into the commercial sector. Internet2 is the second-generation Internet; it helps to develop advanced network applications and technologies for research and higher education by recreating the partnerships among academia, industry, and government. Another backbone research project, called Abilene, began in 1998, and it was to serve Internet2; Abilene now reaches more than 170 American research universities. Internet2 research targeted the development of new protocols for handling applications that demand very high bandwidth and very low, controlled latencies (delays imposed by processing signals en route). We need a powerful network with high speed and high bandwidth to transfer the large amounts of data that tele-immersion will produce; Internet2 will provide bandwidth and speeds up to 1,000 times those of today's Internet, which is sufficient for this purpose. Internet2 had a peculiar problem: apart from tele-immersion, there were no existing applications that required the high level of performance it provides.

Desktop supercomputers: The Grid will use distributed computing. There are not enough supercomputers to deal with the enormous amounts of data that will rush through the Net in the future. As a solution, new networks will connect PCs so they can share processing power and hard disk space; they will be locked into a grid, effectively creating one supercomputer. About a dozen American universities are doing research on various aspects of immersive technologies, including USC, the University of North Carolina, the University of Pennsylvania and Brown University. Two institutions in particular, Penn and UNC (the University of North Carolina), are leading the research on tele-immersion.

GRID: To solve the problem of supercomputers, something in the form of a network called the Grid has been developed. The system has been tested on Internet2, the broadband version of the Internet for transmitting high volumes of data. Such grids would perform the trillions of calculations needed to create a holographic environment; a network of computers that share power could also possibly support these environments.

Bandwidth issues: The network bandwidth required to make tele-immersion work is one of the main concerns of this new technology. It is estimated that as much as 1.2 gigabits per second will be needed for future high-quality effects, which is much higher than the average home connection bandwidth. The exact amount of bandwidth needed for each scene depends on the complexity of the background. With time, the number of megabits used for transmitting a scene will fall as advanced compression techniques are established. Initially, bandwidth-intensive applications will have to be limited to the larger organizations that can afford high connection speeds.

8.CURRENT DEVELOPMENTS
Haptic sensors: Miniaturized force/torque sensors

There is an increasing need for measuring the forces acting between human hands and the environment. External finger forces are measured by placing force-sensing pads at the fingertips. A wide variety of such pads have been developed in the past for applications in robotics and medicine, using resistive, capacitive, piezoelectric, or optical elements to detect force. A critical problem with these force sensors is that they are often bulky and inevitably deteriorate the human's haptic sense, since the fingers cannot directly touch the environment surface. Recently, much research has focused on reducing this problem by inventing thinner and more flexible force-sensing pads. Here, a new approach to the detection of finger forces is presented in order to completely eliminate any impediment to the natural haptic sense, hence the name "haptic sensors". (Haptic means "relating to or based on the sense of touch".) An optical sensor mounted on the fingernail detects the force. This allows the human to touch the environment with bare fingers and perform fine, delicate tasks using the full range of the haptic sense. Miniaturized optical components and circuitry allow the sensor to be disguised as a decorative fingernail covering. The haptic sensor is a new type of touch sensor for detecting contact pressure at human fingertips; the sensor is mounted on the fingernail rather than on the fingertip. Specifically, the fingernail is instrumented with miniature light-emitting diodes (LEDs) and photodetectors in order to measure changes in the reflection intensity when the fingertip is pressed against a surface. The changes in intensity are then used to determine changes in the blood volume under the fingernail, a technique termed "reflectance photoplethysmography." A hemodynamic model is used to investigate the dynamics of the blood volume at two locations under the fingernail. A miniaturized prototype nail sensor has been designed, built, and tested, and the theoretical analysis has been verified through experiment and simulation.

Fig.: Implementation of fingernail touch sensors

The figure shows the implementation of the fingernail touch sensors. For the prototype shown here, two photodiode arrays of dimension 4 mm × 1 mm are attached end to end on the bottom side. Up to 8 of the 32 total photodiodes can be wired up at once, resulting in up to eight sensing locations along the length of the fingernail. Up to three LEDs of dimension 0.25 mm × 0.25 mm can be placed in flexible locations beside the photodiode arrays. Haptic sensors would allow people to touch projections as if they were real. A 3D sensor and supporting software have been developed and patented that enable the real-time visualization of the haptic sense of pressure. Haptic sensors can be used in tele-immersion systems to sense pressure and, in combination with other devices, reconstruct the feeling of touch.
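The signal chain described above can be summarized as: pressing the fingertip changes the blood volume under the nail, which changes the light reflected back to the photodiodes, and after per-user calibration that intensity change can be mapped to an estimate of contact force. The sketch below is a deliberately simplified illustration of that mapping; the linear model, its sign and its gain are hypothetical placeholders, not the published hemodynamic sensor model.

```python
# Toy mapping from fingernail reflectance readings to an estimated contact force.
import numpy as np

def estimate_force(reflectance, baseline, gain_newtons_per_unit=2.5):
    """Map the change in reflected-light intensity (arbitrary units) to force (N).

    Assumption: a simple linear, monotonic relationship calibrated per user.
    In the real sensor the sign and shape of the response depend on where under
    the nail the photodiode sits, so this is illustrative only.
    """
    delta = baseline - np.asarray(reflectance, float)   # deviation from the resting reading
    delta = np.clip(delta, 0.0, None)                    # ignore noise below the baseline
    return gain_newtons_per_unit * delta

# Example: readings from one photodiode while the finger presses and releases.
baseline = 1.00                       # resting reflectance measured at calibration time
samples = [1.00, 0.95, 0.82, 0.70, 0.71, 0.90, 1.00]
print(estimate_force(samples, baseline))
```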

9.CONCLUSION
When tele-immersion becomes commonplace, it will probably enable a wide variety of important applications. Teams of engineers might collaborate at great distances on computerized designs for new machines that can be tinkered with as though they were real models on a shared workbench. Archaeologists from around the world might experience being present during a crucial dig. Rarefied experts in building inspection or engine repair might be able to visit locations without losing time to air travel. Tele-immersion is a technology that is certainly going to bring a new revolution to the world, and let us all hope that this technology reaches the world in its full flow as quickly as possible. In fact, tele-immersion might come to be seen as real competition for air travel, unlike videoconferencing. Although few would claim that tele-immersion will be absolutely as good as "being there" in the near term, it might be good enough for business meetings, professional consultations, training sessions, trade show exhibits and the like. Business travel might be replaced to a significant degree by tele-immersion in 10 years. This is not only because tele-immersion will become better and cheaper but because air travel will face limits to growth because of safety, land use and environmental concerns. Undoubtedly tele-immersion will pose new challenges as well. Some early users have expressed a concern that tele-immersion exposes too much, and that telephones and videoconferencing tools make it easier for participants to control their exposure: to put the phone down or move off-screen. We are hopeful that with experience we will discover both user-interface designs and conventions of behavior that address such potential problems.

10.FUTURE SCOPE
The tele-immersion system of the future would ideally:

- Support one or more flat panels/projectors with ultra-high color resolution (say 5000x5000)
- Be stereo capable without special glasses
- Have several built-in micro-cameras and microphones
- Have tether-less, low-latency, high-accuracy tracking
- Network to teraflop computing via multi-gigabit optical switches with low latency
- Have exquisite directional sound capability
- Be available in a range of compatible hardware and software configurations
- Have gaze-directed or gesture-directed variable resolution and quality of rendering
- Incorporate AI-based predictive models to compensate for latency and anticipate user transitions
- Use a range of sophisticated haptic devices to couple to human movement and touch
- Accommodate disabled and fatigued users in the spirit of the Every Citizen Interface to the NTII (National Tele-Immersion Initiative)

