(May 9, 1995)
Jason Leigh, Christina Vasilakis, and Craig Barnes
Description of the CASA Project
VRML-ready OpenInventor-based Models
Exploring Interface Ideas in Virtual Environments
CASA Intelligent Agents
CAVE-to-CAVE in CASA
Beyond CAVE-to-CAVE: Virtual Worlds with a Legacy
In the latter, the goal has been to hide the computers in the environment while providing support for humans in their everyday activities. There are already many instances of humans using computers in this manner; the computers are hidden inside everyday appliances and equipment such as automobiles and VCRs. It is interesting to observe, however, that the most difficult-to-use appliances are typically those with the most computer-like interfaces (e.g., VCRs and microwave ovens), whereas the easiest to use are those that hide the computer interface (e.g., cars and refrigerators). Beyond the management of "creature comforts" for homes, "smart" environments can be invaluable aids in the work environment.
One environment we have been considering is the computer augmentation of the beamline control system at Argonne's Advanced Photon Source. The beamline is a resource for scientists who conduct x-ray crystallography. Operating the beamline hardware and software requires a complex series of specific steps that result in the gathering and analysis of gigabytes of data. Since the beamline is a time-shared facility, users operate under strict time constraints that require them to work 24 hours a day for a period of about four days. At the very least, these stringent conditions lead to user fatigue. Fatigue, in turn, can lead to user error in the handling of hardware, which can result in costly damage to equipment. For example, forgetting to place a beam stop before firing the beamline can destroy a $100,000 detector. In addition, mistakes caused by fatigue can introduce errors in data collection, which may eventually corrupt entire experimental results. These are the types of errors we propose could be avoided through computer-augmented environments.
We believe the CAVE is a fascinating candidate as a prototyping tool for these types of environments. As we all know, architectural walkthroughs are considered the "killer application" for virtual reality, and the CAVE is one of the better systems for them. We decided it would be interesting to take the next step in architectural walkthroughs and put some intelligence inside the architectural models, in essence providing a virtual-reality testbed for designing and debugging "smart" environments.
The first prototype of CASA was displayed recently at EVE4 (Electronic Visualization Event 4) in Chicago on May 9, 1995. The prototype featured a tour through a virtual "smart" home depicting a house of the future, and served as a means to experiment with a number of enabling technologies, including CAVE-to-CAVE collaboration.
One idea spawned from CASA that we are pursuing with NCSA is the development of Community-VRML (C-VRML). Presently, VRML is a static model description language, much as HTML is a static hypertext description language. We are interested in pursuing the possibility of using the CAVE as a browser that brings over a C-VRML document and also provides the kind of networked support that allows multiple C-VRML viewers to remotely connect to the server site and engage in collaborative sessions or meetings from within the virtual environment generated by the C-VRML viewer and the server's database. Currently, net-surfers access a WEB site simply to read text, view images, etc., and then move on to another site; it is a very individual process. With a networked C-VRML viewer, surfers could connect to a WEB site and not only read and see what is there, but also see who is currently there and interact with them in a discussion about the material at the site. This is much like MUDing; however, we are taking MUDing to a higher level by making the entire Internet and all its WEB sites one huge MUD, with a diversity of information and people that you would not normally find on current dedicated MUDs.
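As a rough illustration of the networked support this would require, the sketch below shows the sort of presence and chat messages a C-VRML viewer and its server might exchange alongside the world description itself. The message types and fields are purely hypothetical; they are not part of VRML or of any existing C-VRML design.

    /*
     * Speculative sketch of viewer/server message traffic for a networked
     * C-VRML site: besides fetching the world description, each viewer
     * announces its presence so every visitor can see who else is there and
     * talk with them. All names and fields here are hypothetical.
     */
    #include <cstdio>
    #include <cstring>

    enum MsgType { MSG_JOIN, MSG_LEAVE, MSG_POSE, MSG_CHAT };

    struct Message {
        MsgType type;
        char    who[32];     /* visitor's name or avatar id             */
        float   x, y, z;     /* position in the shared world (MSG_POSE) */
        char    text[128];   /* chat text (MSG_CHAT)                    */
    };

    /* On the server, a received message would be rebroadcast to every other
     * connected viewer; here we simply print what would be sent. */
    void broadcast(const Message &m)
    {
        switch (m.type) {
        case MSG_JOIN:  std::printf("%s entered the site\n", m.who); break;
        case MSG_LEAVE: std::printf("%s left the site\n", m.who); break;
        case MSG_POSE:  std::printf("%s moved to (%.1f, %.1f, %.1f)\n",
                                    m.who, m.x, m.y, m.z); break;
        case MSG_CHAT:  std::printf("%s says: %s\n", m.who, m.text); break;
        }
    }

    int main()
    {
        Message join = { MSG_JOIN, "", 0, 0, 0, "" };
        std::strcpy(join.who, "visitor1");
        broadcast(join);

        Message chat = { MSG_CHAT, "", 0, 0, 0, "" };
        std::strcpy(chat.who, "visitor1");
        std::strcpy(chat.text, "What is this exhibit about?");
        broadcast(chat);
        return 0;
    }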
The reason for using this kind of paradigm is that, in contrast to everyday life, where tactile feedback allows you to operate things without necessarily looking at them, images in virtual environments still have no tactile properties. Hence, when you make selections with a menu you always have to look at the target first and then use your wand or glove to make the selection. The task involves three steps: (1) seeing the target, (2) aiming the wand or glove at the target, and (3) making the selection. With the InYerFace, step 2 is eliminated because once the user acquires the target visually, he/she is already ready to make the selection. One additional benefit of the InYerFace comes when it is used with Head-Mounted Displays, Fish-Tank VR systems, and the ImmersaDesk(tm). In such systems, fatigue is a common problem caused by prolonged raising of the user's arm to make selections. The InYerFace can greatly reduce this problem by reducing the number of operations that require arm movements.
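The following sketch illustrates the gaze-directed selection idea in C++, assuming hypothetical tracker-query functions in place of the actual CAVE library calls: the head tracker's view direction does the aiming, so looking at a target and pressing a single button completes the selection.

    /*
     * Hypothetical sketch of gaze-directed selection in the spirit of the
     * InYerFace. The tracker queries and target list below are placeholders,
     * not the actual CAVE library or InYerFace API.
     */
    #include <cmath>
    #include <cstdio>

    struct Vec3 { float x, y, z; };

    static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static float len(Vec3 a)         { return std::sqrt(dot(a, a)); }

    struct Target { const char *name; Vec3 position; };

    /* Placeholder tracker queries standing in for the VR library's calls. */
    Vec3 headPosition()      { return {0.0f, 5.0f, 0.0f}; }   /* eye position       */
    Vec3 headDirection()     { return {0.0f, 0.0f, -1.0f}; }  /* gaze (view) vector */
    bool wandButtonPressed() { return true; }                 /* selection trigger  */

    /* Return the target the user is currently looking at, or NULL. A target
     * is "gazed at" when the angle between the gaze vector and the vector
     * from the eye to the target is below a small threshold. */
    const Target *gazedTarget(const Target *targets, int count, float maxAngleRad)
    {
        Vec3 eye = headPosition();
        Vec3 gaze = headDirection();
        const Target *best = nullptr;
        float bestAngle = maxAngleRad;

        for (int i = 0; i < count; ++i) {
            Vec3 to = { targets[i].position.x - eye.x,
                        targets[i].position.y - eye.y,
                        targets[i].position.z - eye.z };
            float angle = std::acos(dot(gaze, to) / (len(gaze) * len(to)));
            if (angle < bestAngle) {   /* keep the target closest to the gaze ray */
                bestAngle = angle;
                best = &targets[i];
            }
        }
        return best;
    }

    int main()
    {
        Target menu[] = { {"lights", {0, 5, -10}}, {"thermostat", {4, 5, -10}} };

        /* One frame of the selection loop: looking at the target is the
         * aiming step, so a single button press completes the selection. */
        const Target *t = gazedTarget(menu, 2, 0.1f);
        if (t && wandButtonPressed())
            std::printf("selected: %s\n", t->name);
        return 0;
    }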
The Information Wide Area Year (I-WAY) is a proposed experimental high-performance network linking dozens of the country's fastest computers and advanced visualization environments. The network will be based on Asynchronous Transfer Mode (ATM) technology, an emerging standard for advanced telecommunications networks, and will support both TCP/IP over ATM and direct ATM-oriented protocols. It will provide the wide-area high-performance backbone for various experimental networking activities at Supercomputing '95. (For more information see: http://www.anl.gov)
The CAVE-to-CAVE work is currently divided into a number of efforts. EVL has a long-standing history and considerable expertise in technology transfer. Consequently, EVL's role in the CAVE-to-CAVE effort has been to determine the needs of its potential users and to communicate those needs to Argonne National Laboratory, which is developing the networking software to enable CAVE-to-CAVE communications. EVL is primarily interested in researching new interaction techniques and applications of distributed collaborative virtual environments.
CASA's CAVE-to-CAVE component allowed multiple networked participants (running the CAVE or the CAVE simulator) to explore the same CASA space. Each participant could choose an avatar that they had designed themselves using various 3D modeling packages. The avatars consisted of a head, a body and a hand. These components derived their orientation and position from the CAVE head-tracker (for avatar head orientation), CAVE world-space navigation (for body position), and the CAVE wand-tracker (for avatar hand orientation and position). These separate components gave users greater expressiveness with their avatars, allowing gestures such as nodding the head and waving the hand.
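A minimal sketch of this mapping, with placeholder tracking queries standing in for the actual CAVE library calls, might look as follows.

    /*
     * Sketch of how an avatar's parts might be driven by CAVE tracking data
     * as described above: head orientation from the head tracker, body
     * position from world-space navigation, and hand pose from the wand
     * tracker. The query functions are placeholders, not actual CAVE calls.
     */
    #include <cstdio>

    struct Pose {
        float x, y, z;          /* position              */
        float azi, ele, roll;   /* orientation (degrees) */
    };

    struct Avatar {
        Pose head;   /* follows the head tracker  -> nodding       */
        Pose body;   /* follows world navigation  -> moving around */
        Pose hand;   /* follows the wand tracker  -> waving        */
    };

    /* Placeholder tracking queries standing in for the VR library. */
    Pose queryHeadTracker() { return {0, 5, 0,   0, -10, 0}; }
    Pose queryNavigation()  { return {2, 0, -3,  90, 0, 0}; }
    Pose queryWandTracker() { return {1, 4, -1,  0, 0, 45}; }

    /* Called once per frame; the resulting Avatar state is what would be
     * shared with the other CAVEs so remote users can see the gestures. */
    void updateLocalAvatar(Avatar &a)
    {
        a.head = queryHeadTracker();
        a.body = queryNavigation();
        a.hand = queryWandTracker();
    }

    int main()
    {
        Avatar me;
        updateLocalAvatar(me);
        std::printf("head elevation %.1f deg, hand roll %.1f deg\n",
                    me.head.ele, me.hand.roll);
        return 0;
    }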
The CAVE-to-CAVE communications were provided by a communications library developed at EVL called SpiffNet. SpiffNet is a C++-based library that provides near-transparent access to an internetworked, centralized database server. When programming with SpiffNet, users work with data objects much as they do with objects in the CAVE's shared memory. SpiffNet provides a number of base data types like float and int, called nfloats and nints (for network-floats and network-ints), which, when instantiated, can be treated exactly like regular ints and floats in C. The fact that they are actually networked variables that may be shared by other CAVEs is hidden from the programmer. The underlying client manager determines the best way of broadcasting changes in these variables. The central server maintains a consistent database schema of all the networked clients and broadcasts changes to all clients that need the data, at data rates compatible with each client. This prevents slow clients from being inundated with data from the server.
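The fragment below is an illustrative sketch of this networked-variable idea, not the actual SpiffNet API: a hypothetical nfloat behaves like an ordinary C float while quietly reporting each assignment to a client manager for broadcast.

    /*
     * Illustrative sketch (not the actual SpiffNet API) of a networked float
     * that a programmer can use like an ordinary float, while assignments are
     * reported to a client manager that decides how and when to broadcast the
     * change to the central server.
     */
    #include <cstdio>

    /* Stand-in for SpiffNet's client manager; here it just logs the update. */
    struct ClientManager {
        void reportChange(const char *name, float value) {
            std::printf("queueing update: %s = %f\n", name, value);
        }
    };

    static ClientManager g_client;

    /* A hypothetical "nfloat": reads like a float, but every assignment is
     * handed to the client manager for propagation to other CAVEs. */
    class nfloat {
        const char *name_;   /* key identifying the variable in the shared database */
        float value_;
    public:
        nfloat(const char *name, float initial = 0.0f) : name_(name), value_(initial) {}

        nfloat &operator=(float v) {          /* write: record locally and report */
            value_ = v;
            g_client.reportChange(name_, v);
            return *this;
        }
        operator float() const { return value_; }   /* read: just like a float */
    };

    int main()
    {
        nfloat doorAngle("casa/front_door/angle");

        doorAngle = 45.0f;                  /* looks like a normal assignment... */
        float twice = doorAngle * 2.0f;     /* ...and reads like a normal float  */
        std::printf("twice the angle: %f\n", twice);
        return 0;
    }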
By using a general networking system rather than a CAVE-specific networking library, we were also able to connect non-CAVE clients to the CAVE. For example, we were able to digitize voice on a remote SGI Indy and send amplitude information to the CAVE, which could be mapped onto an avatar's head to simulate lip-synching. This lip-synching could be a lower-bandwidth alternative to transmitting facial images across congested network lines.
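As a hedged sketch of this mapping, the function below turns an incoming amplitude value into a mouth (jaw) opening for the avatar's head; the smoothing factor and opening range are illustrative choices, not values from the original system.

    /*
     * Sketch of amplitude-driven lip-synching: a remote machine digitizes
     * voice and sends only an amplitude value, and the CAVE side maps that
     * value to how far the avatar's jaw is open. Constants are illustrative.
     */
    #include <cstdio>

    /* Map an incoming amplitude (0.0 = silence, 1.0 = loudest) to a jaw
     * opening in degrees, with simple exponential smoothing so the mouth
     * does not flicker from frame to frame. */
    float mouthOpening(float amplitude, float &smoothed)
    {
        const float kSmoothing  = 0.3f;   /* fraction of the new sample to blend in */
        const float kMaxOpenDeg = 25.0f;  /* jaw rotation at full amplitude         */

        if (amplitude < 0.0f) amplitude = 0.0f;
        if (amplitude > 1.0f) amplitude = 1.0f;

        smoothed += kSmoothing * (amplitude - smoothed);
        return smoothed * kMaxOpenDeg;
    }

    int main()
    {
        /* A short burst of amplitude samples as they might arrive over the net. */
        float samples[] = {0.0f, 0.8f, 0.6f, 0.1f, 0.0f};
        float smoothed = 0.0f;

        for (float a : samples)
            std::printf("amplitude %.1f -> jaw %.1f deg\n", a, mouthOpening(a, smoothed));
        return 0;
    }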