Our research on distance estimation and perceptual adaptation in real and virtual environments aims to understand how people perceive virtual environments. We are especially interested in why people underestimate distance in different kinds of virtual environments and how experience in a virtual environment affects perception of the size and distance of objects.
Virtual Environment Facilities
Our large-screen immersive display (LSID) system consists of three 10 ft wide x 8 ft high screens placed at right angles to one another, forming a three-walled room. Computer-generated images are rear projected onto the wall screens by three Projection Design F1+ projectors, each with a resolution of 1280 x 1024, providing participants with 270 degrees of nonstereoscopic, immersive visual imagery. A ceiling projector also provides front-projected ground imagery on the floor. The primary interfaces for our large-screen immersive display system are an instrumented bicycle and a Woodway treadmill.
Our head-mounted display (HMD) system is an NVIS nVisor ST with optical see-through functionality. The HMD contains two small LCOS displays, each with a resolution of 1280 x 1024 pixels. An Intersense Vistracker IS-1200 6-degrees-of-freedom optical tracker is mounted on the HMD to measure participants’ position and orientation.
Below we describe our individual projects on distance estimation and perceptual adaptation. Click on the titles of projects to download PDFs of the published research.
[Image: target poles in virtual hallway.]
Virtual environments are commonly displayed using one of two technologies: a head-mounted display (HMD) or a large-screen immersive display (LSID) system. Users can also view targets using augmented reality (AR), in which virtual targets are superimposed on a view of the real environment seen through an HMD. Distance perception is commonly measured using one of two tasks: direct blindfolded walking or timed imagined walking. Here, we compared different visual presentation methods using both measurement protocols while keeping the setting, targets, distances, and visual model constant.
We asked participants to estimate the distance to a pair of poles located 6 to 18 meters in front of them in a hallway setting. Each participant viewed the same hallway environment in one of the following six presentation methods:
1. Real: unrestricted real-world view of hallway
2. Real+HMD: real-world view of hallway seen through an HMD
3. Virtual+HMD: virtual model of hallway viewed in an HMD
4. Virtual+LSID: virtual model of hallway viewed on multiple large screens
5. Photorealistic+LSID: photograph-based presentation of hallway viewed on multiple large screens
6. AR: augmented reality presentation of virtual target objects superimposed on a real hallway seen through HMD.
Our first experiment compared the first five presentation methods using the timed imagined walking protocol, which is suitable for assessing distance estimation in both LSID and HMD systems. The second experiment compared the non-LSID presentation methods (conditions 1 through 3) along with the AR condition using a blindfolded walking protocol.
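The two protocols yield distance estimates in different ways: blindfolded walking produces a walked distance directly, while timed imagined walking must be converted to a distance using each participant's walking speed. A minimal sketch of that conversion and of a distance-accuracy ratio is below; the function names and all numbers are illustrative assumptions, not the study's actual procedure or data.

```python
# Illustrative sketch of scoring the two distance-estimation tasks.
# Assumed convention: timed imagined walking is converted to distance
# using a baseline walking speed measured for each participant.

def imagined_walking_estimate(imagined_time_s, baseline_speed_m_s):
    """Timed imagined walking: estimated distance is the participant's
    baseline walking speed times the imagined walking duration."""
    return baseline_speed_m_s * imagined_time_s

def accuracy_ratio(estimated_m, actual_m):
    """Estimated / actual distance; a ratio below 1.0 indicates
    underestimation (distance compression)."""
    return estimated_m / actual_m

# Hypothetical example: a participant with a 1.4 m/s baseline speed
# imagines walking for 6.0 s toward a target 12 m away.
est = imagined_walking_estimate(6.0, 1.4)   # 8.4 m
print(accuracy_ratio(est, 12.0))            # 0.7, i.e., 30% underestimation
```

For blindfolded walking, the walked distance would be plugged into the same ratio directly, which is what makes the two protocols comparable on a common scale.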
Somewhat surprisingly, we found similar levels of accuracy when the virtual environment was displayed on the HMD and on the LSID. This result suggests that the substantial differences between the two display technologies do not lead to differences in distance perception, at least as measured by the timed imagined walking protocol. We also found that using a photorealistic visual model did not substantially remedy the distance compression experienced by participants in the virtual environment displayed on the LSID system.
The most striking difference between the results obtained using timed imagined walking and direct blindfolded walking was the performance of participants in the Real+HMD condition. While both measurement methods indicated significant underestimation of distances in the Virtual+HMD condition relative to real-world estimates, the difference between the Real+HMD and Real conditions was evident only with the blindfolded walking protocol. When participants viewed the targets in the real world through the HMD but imagined moving to the targets while standing in place, there was no difference between the Real and Real+HMD conditions. When participants viewed the targets through the HMD and then actually walked to them, they underestimated distance in the Real+HMD condition relative to the Real condition. Thus, the encumbrance of the HMD affected distance estimates only when participants were required to physically walk to the target. This may be related to a greater effect of the headset's forward weight or the pull from its cables when walking than when standing still.
This investigation represents an important step in making direct comparisons of distance estimation across display systems and measurement protocols. Such comparisons have been difficult to make across the many prior studies of distance perception due to the wide variation in critical factors such as the visual targets and settings, the fidelity of the visual virtual model, and the range of distances examined. Our results both confirm conclusions from other studies of distance estimation and raise new questions about distance estimation in real and virtual environments. Further work is needed to determine how multiple factors work together to produce underestimation of distance in virtual environments.