Laboratoire de recherche en infographie et réalité virtuelle/augmentée (LIRV)
-----

Activity Summary



Shared Visualization Environments in Virtual Reality

Between 2000 and 2003, a research and development project funded by the Laboratoire universitaire Bell (LUB) led to the creation of a distributed software environment that supports person-to-person interaction through audio, video, and tactile communication, as well as the visualization and manipulation of synthetic objects.

In 2002-2003, our work enabled a three-way video conference between three heterogeneous 3D immersive environments: one at École Polytechnique (CAVE), another at the Centre de recherche en calcul appliqué (CERCA) (ImmersaDesk), and the last at McGill University (CAVE). This video conferencing environment uses the WLV scene graph and the image compression tools developed at McGill. Current research is divided among several sub-projects corresponding to practical applications of the concept of video conferencing in an immersive environment. The MolEdit application (a molecule editor) has been adapted in this context, and we are also developing an independent application in which participants meet in a virtual conference room and can interact with geometric objects produced by modeling tools. In this phase of the project, we are now seeking to refine the interface between the human and the virtual environment and to better define the protocols for scene sharing and interaction with objects.

A Distributed Shared Visualization Environment

Jeremy R. Cooperstock and Benoît Ozell
Centre for Intelligent Machines and
Centre de recherche en calcul appliqué

Goal

The goal of this project is to create an environment that simultaneously supports distributed computer-mediated human-human interaction through audio, video, and possibly haptic communication, as well as shared visualization and manipulation of synthetic objects.

Objectives

The Distributed Shared Visualization Environment is a project that aims to explore the challenging research problems associated with distributed computer-mediated human-human interaction. The project brings together researchers with a wide range of backgrounds in digital image processing, audio processing, music, haptics, human-computer interaction, communications, teleoperation, telepresence, and engineering. The focus is not on a single problem domain (e.g. office environments); rather, the project seeks to create a general-purpose environment for collaboration and shared visualization.

Shared Reality Environment

In the camera-monitor mediated world of videoconferencing, the limitations of communications bandwidth and equipment capability tend to place a severe handicap on the senses of sight and sound and eliminate the sense of touch. As a result, even in state-of-the-art videoconference rooms using the highest quality equipment, the sense of co-presence enjoyed by individuals in the same room is never fully achieved. Gaze awareness, recognition of facial gestures, social cues through peripheral or background awareness, and sound spatialization through binaural audio, all important characteristics of multi-party interaction, are often lost in a videoconference. While many of these issues can be addressed in part by improved display technology and increased bandwidth, we believe that the result will still be inadequate.

To overcome these limitations, we believe that the computer must play a more active role as an intermediary in the communications. Furthermore, it is necessary to move from the restricted videoconference environments of television monitors and stereo speakers to immersive spaces in which video fills the participant's visual field and is reinforced by spatialized audio cues. Haptic feedback should be introduced to help bridge the physical separation of remote individuals. This feedback could range from reproducing floor vibrations as a user walks about to conveying the tactile response of a surgeon's instrument as it moves through different tissues.

Collaborative Framework in Engineering

In the field of computational design and high-performance computing, engineers and designers in remote locations need to collaborate using videoconferencing, but also need to exchange virtual (synthetic) models and analyze simulation results. Simple simulation, visualization, and analysis tasks are usually carried out in the same place, on the same computer or on computers connected by a local area network. But as projects grow and become more complex, high-performance computers are needed to solve problems and high-performance visualization environments are needed to analyze solutions. These facilities are often not available at the same location.

In order for participants to maximize the effectiveness of their discussion and analysis of simulation results, synthetic objects need to be created in 3D as if they were part of the videoconference scene. The participants should be able to manipulate them as if they were real, to point out certain parts of an object to others, and to identify different views of these objects for others to see. These objects could be displayed on walls or on a table in 3D. Depending on the application, gesture and speech recognition may also be integrated so that the virtual environment responds to various commands.
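To make the idea of shared object manipulation more concrete, the sketch below shows how one site might broadcast a participant's manipulation of a synthetic object as a compact transform update that the other sites apply to their local copy of the scene. The message layout, names, and the lack of endianness handling are assumptions made only for this example; the project's actual scene-sharing protocol is not specified on this page.

  // Hypothetical sketch: a compact transform-update message that a site could
  // broadcast when a participant moves a shared synthetic object.
  #include <array>
  #include <cstdint>
  #include <cstring>
  #include <vector>

  struct ObjectUpdate {
      uint32_t objectId;                 // which shared object was manipulated
      uint32_t senderId;                 // which participant performed the action
      std::array<float, 3> translation;  // position in the shared scene frame
      std::array<float, 4> rotation;     // orientation as a unit quaternion (x, y, z, w)
  };

  // Flatten the update into a byte buffer for transmission to the other sites.
  std::vector<uint8_t> pack(const ObjectUpdate& u) {
      std::vector<uint8_t> buf(sizeof u);
      std::memcpy(buf.data(), &u, sizeof u);
      return buf;
  }

  // Rebuild the update on the receiving side before applying it to the local scene.
  ObjectUpdate unpack(const std::vector<uint8_t>& buf) {
      ObjectUpdate u{};
      std::memcpy(&u, buf.data(), sizeof u);
      return u;
  }

  int main() {
      ObjectUpdate u{42u, 1u, {0.0f, 1.2f, -0.5f}, {0.0f, 0.0f, 0.0f, 1.0f}};
      ObjectUpdate copy = unpack(pack(u));   // round trip, as a sender and receiver would
      return copy.objectId == u.objectId ? 0 : 1;
  }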

Benefits to End-User

This project will facilitate and enhance human interaction in multi-disciplinary design and analysis in engineering and scientific applications. It will encourage international collaboration using advanced high-speed networks, enabling researchers to work together, whether their colleagues live across the country or across the ocean, and to access geographically-distributed computing, storage, and display resources.

The usefulness of this type of environment will be greatly enhanced through its manipulation capabilities and functional links to current CAD and CAE packages. Teams scattered around a country or around the world could work together on the same project using the proposed integrated collaborative framework. Using the videoconferencing infrastructure, several people can work together locally or over global communication networks such as broadband ISDN. This type of collaborative framework for distributed, distance problem solving and analysis also finds applications in other fields such as science, medicine, architecture, and education.

Methodology

The planned testbed consists of three small audio-insulated rooms, two based at McGill University and a third at CERCA, each equipped with high resolution video projectors, cameras, microphones, and multi-channel audio, interconnected by an ATM switch. The video will be rear-projected to cover three walls of each room, thereby encompassing the users' visual field and creating the illusion of a larger shared space. Multi-channel audio will be used to produce effective spatialization of sound sources, enhancing the sense of co-presence. Powerful computer hardware will be exploited to perform advanced, real-time graphics rendering, supporting three-dimensional visualization through stereoscopic shuttered glasses.
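As one concrete example of the audio spatialization mentioned above, the sketch below computes constant-power panning gains for a mono source placed between two adjacent speakers. The gain law and parameter values are assumptions chosen for illustration, not the project's actual design.

  // Illustrative sketch: constant-power amplitude panning of a mono source
  // between two adjacent speakers of a multi-channel setup.
  #include <cmath>
  #include <cstdio>

  // pan in [0, 1]: 0 = entirely in the left speaker, 1 = entirely in the right.
  void constantPowerGains(float pan, float& gainLeft, float& gainRight) {
      const float kHalfPi = 1.5707963f;
      gainLeft  = std::cos(pan * kHalfPi);   // gainLeft^2 + gainRight^2 = 1, so the
      gainRight = std::sin(pan * kHalfPi);   // perceived loudness stays constant
  }

  int main() {
      float gl = 0.0f, gr = 0.0f;
      constantPowerGains(0.66f, gl, gr);     // a source two thirds toward the right
      std::printf("left %.3f  right %.3f\n", gl, gr);
      return 0;
  }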

The individual components of this project include:

  • acquisition and translation of high-level object descriptions from CAD packages (e.g. CATIA or STEP) and the implementation of a fast and efficient object transport protocol (a simple geometry message is sketched after this list)
  • development of audio/video transport protocols for high-fidelity, low-latency communication, based on context-sensitive image capture and compression
  • real-time video blending, background removal, image mosaicing, and perspective transformation of displayed people and objects in each room, accompanied by audio spatialization of sound sources (the background-removal and blending steps are sketched after this list)
  • manual gesture recognition for object manipulation: image feature selection using an attentional mechanism, feature tracking, and development of suitable gesture recognition algorithms (a minimal gesture classifier is sketched after this list)
  • flexible integration of high-bandwidth audiovisual streams with time-sensitive visualization data: synthetic object generation, rendering, and manipulation in the context of high-fidelity videoconferencing, and development of a control interface for rendering and manipulating synthetic objects
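The first sketch below illustrates one way the object transport mentioned above could flatten a translated CAD mesh into a single length-prefixed message. The Mesh structure, field layout, and packMesh function are hypothetical, introduced only for this example, and do not describe the project's actual format.

  // Hypothetical geometry message: a triangle mesh flattened into one buffer,
  // prefixed by its vertex-float and index counts.
  #include <cstdint>
  #include <cstring>
  #include <vector>

  struct Mesh {
      std::vector<float>    vertices;  // x, y, z triples
      std::vector<uint32_t> indices;   // three indices per triangle
  };

  std::vector<uint8_t> packMesh(const Mesh& m) {
      const uint32_t floatCount = static_cast<uint32_t>(m.vertices.size());
      const uint32_t indexCount = static_cast<uint32_t>(m.indices.size());
      std::vector<uint8_t> buf(2 * sizeof(uint32_t) +
                               floatCount * sizeof(float) + indexCount * sizeof(uint32_t));
      uint8_t* p = buf.data();
      std::memcpy(p, &floatCount, sizeof floatCount);              p += sizeof floatCount;
      std::memcpy(p, &indexCount, sizeof indexCount);              p += sizeof indexCount;
      std::memcpy(p, m.vertices.data(), floatCount * sizeof(float));  p += floatCount * sizeof(float);
      std::memcpy(p, m.indices.data(), indexCount * sizeof(uint32_t));
      return buf;
  }

  int main() {
      Mesh triangle{{0.0f, 0.0f, 0.0f,  1.0f, 0.0f, 0.0f,  0.0f, 1.0f, 0.0f}, {0, 1, 2}};
      std::vector<uint8_t> msg = packMesh(triangle);
      return msg.size() == 2 * sizeof(uint32_t) + 9 * sizeof(float) + 3 * sizeof(uint32_t) ? 0 : 1;
  }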
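The next sketch shows the per-pixel core of background removal and blending: pixels that differ sufficiently from an empty-room reference frame are treated as foreground and composited over the rendered scene. The threshold, image layout, and function names are assumptions made for the illustration.

  // Illustrative background removal and compositing on simple interleaved RGB images.
  #include <cstdint>
  #include <cstdlib>
  #include <vector>

  struct Image {
      int width;
      int height;
      std::vector<uint8_t> rgb;  // size = width * height * 3
  };

  // Mark a pixel as foreground when it differs enough from the empty-room frame.
  std::vector<uint8_t> foregroundMask(const Image& frame, const Image& background,
                                      int threshold = 30) {
      const int pixels = frame.width * frame.height;
      std::vector<uint8_t> mask(pixels, 0);
      for (int i = 0; i < pixels; ++i) {
          int diff = 0;
          for (int c = 0; c < 3; ++c)
              diff += std::abs(int(frame.rgb[3 * i + c]) - int(background.rgb[3 * i + c]));
          mask[i] = (diff > threshold) ? 255 : 0;
      }
      return mask;
  }

  // Composite masked video pixels over the synthetic scene (simple binary blend).
  void blendOver(const Image& video, const std::vector<uint8_t>& mask, Image& scene) {
      for (int i = 0; i < int(mask.size()); ++i)
          if (mask[i])
              for (int c = 0; c < 3; ++c)
                  scene.rgb[3 * i + c] = video.rgb[3 * i + c];
  }

  int main() {
      Image bg{2, 1, {10, 10, 10,  10, 10, 10}};       // empty-room reference frame
      Image frame{2, 1, {200, 50, 50,  12, 11, 9}};    // left pixel contains a participant
      Image scene{2, 1, {0, 0, 0,  0, 0, 0}};          // rendered synthetic scene
      blendOver(frame, foregroundMask(frame, bg), scene);
      return scene.rgb[0] == 200 ? 0 : 1;              // left pixel replaced by video
  }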
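Finally, a deliberately minimal gesture classifier: given a tracked hand position over a short window of frames, it labels the motion as a left swipe, a right swipe, or no gesture from its net horizontal displacement. The window length and threshold are invented for the example and merely stand in for the feature selection, tracking, and recognition work described above.

  // Toy gesture classifier over a tracked hand trajectory (illustration only).
  #include <cstddef>
  #include <deque>
  #include <string>

  struct Point { float x, y; };

  class SwipeDetector {
  public:
      explicit SwipeDetector(float threshold = 0.3f, std::size_t window = 15)
          : threshold_(threshold), window_(window) {}

      // Feed one tracked hand position per frame; returns the detected gesture.
      std::string update(Point p) {
          history_.push_back(p);
          if (history_.size() > window_) history_.pop_front();
          if (history_.size() < window_) return "none";
          float dx = history_.back().x - history_.front().x;
          if (dx > threshold_)  return "swipe-right";
          if (dx < -threshold_) return "swipe-left";
          return "none";
      }

  private:
      float threshold_;          // minimum normalized horizontal travel
      std::size_t window_;       // number of frames considered
      std::deque<Point> history_;
  };

  int main() {
      SwipeDetector detector;
      std::string gesture = "none";
      for (int i = 0; i < 15; ++i)                 // hand moving steadily to the right
          gesture = detector.update({i * 0.05f, 0.5f});
      return gesture == "swipe-right" ? 0 : 1;
  }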


© 2000-2024 - Polytechnique Montréal - LIRV - All rights reserved