We investigated how two people jointly coordinate their decisions and actions in a co-occupied, large-screen virtual environment. The task for participants was to physically cross a virtual road with continuous traffic without getting hit by a car. Participants performed this task either alone or with another person. Experiment 1 first established that stereo vs. non-stereo rendering had little impact on solo road-crossing performance. Experiment 2 capitalized on these results to study how pairs performed the road-crossing task in a non-stereo virtual environment. By rendering two separate streams of non-stereo images based on the two viewers' eye-points, we displayed the correct perspective for both viewers in the co-occupied virtual environment. We found that pairs often crossed the same gap together and closely synchronized their movements when crossing. Pairs also chose larger gaps than individuals did, presumably to accommodate the extra time needed for two people to cross through a gap together. These results provide baseline information about how two people interact and coordinate their behaviors when performing whole-body joint actions in a co-occupied virtual environment. This study also provides a foundation for future studies examining joint action in shared VEs where participants are represented by graphic avatars.
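As a sketch of how a correct perspective can be rendered separately for each tracked viewer of a shared fixed screen, the NumPy code below derives an off-axis (asymmetric-frustum) projection and view matrix from an eye-point, in the manner of Kooima's generalized perspective projection. The screen-corner and eye coordinates are illustrative assumptions; the paper's actual rendering pipeline is not specified here.

```python
import numpy as np

def off_axis_view(eye, pa, pb, pc, near=0.1, far=100.0):
    """Off-axis projection + view matrix for one tracked eye-point and a
    fixed screen (generalized perspective projection, after Kooima).

    pa, pb, pc: screen corners (lower-left, lower-right, upper-left) in
    world coordinates. All coordinates here are illustrative assumptions.
    """
    vr = (pb - pa) / np.linalg.norm(pb - pa)      # screen right axis
    vu = (pc - pa) / np.linalg.norm(pc - pa)      # screen up axis
    vn = np.cross(vr, vu)                         # screen normal
    vn /= np.linalg.norm(vn)
    va, vb, vc = pa - eye, pb - eye, pc - eye     # corners relative to eye
    d = -(va @ vn)                                # eye-to-screen distance
    l = (vr @ va) * near / d                      # frustum extents at near plane
    r = (vr @ vb) * near / d
    b = (vu @ va) * near / d
    t = (vu @ vc) * near / d
    proj = np.array([
        [2 * near / (r - l), 0.0, (r + l) / (r - l), 0.0],
        [0.0, 2 * near / (t - b), (t + b) / (t - b), 0.0],
        [0.0, 0.0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0.0, 0.0, -1.0, 0.0]])
    # View matrix: rotate world into the screen basis, then translate by -eye.
    m = np.eye(4)
    m[0, :3], m[1, :3], m[2, :3] = vr, vu, vn
    tr = np.eye(4)
    tr[:3, 3] = -eye
    return proj, m @ tr

# Two viewers of one shared non-stereo screen, each with its own frustum
# (screen geometry and head positions are made-up example values):
pa, pb = np.array([-2.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0])
pc = np.array([-2.0, 2.5, 0.0])
for eye in (np.array([-0.4, 1.6, 2.0]), np.array([0.5, 1.7, 2.2])):
    proj, view = off_axis_view(eye, pa, pb, pc)
```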
Sound synthesis is the process of generating artificial sounds through some form of simulation or modelling. Synthesis has been demonstrated in the area of sound effects, which are used in the production of a range of popular media, such as video games, TV, film, and augmented and virtual reality. This paper aims to identify which sound synthesis methods achieve the goal of producing a believable audio sample that may replace a recorded sound sample. A perceptual evaluation experiment comparing five different sound synthesis techniques was undertaken: additive synthesis, statistical modelling synthesis with two different feature sets, physically inspired synthesis, concatenative synthesis, and sinusoidal modelling synthesis. The evaluation used eight different sound classes as stimuli and 66 different samples. The additive synthesizer was the only synthesis method not considered significantly different from the reference sample across all sound classes. The results demonstrate that synthesized sound can be considered as realistic as a recorded sample, and we make recommendations for which synthesis methods to use for different sound classes.
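As an illustration of the simplest of these techniques, the following is a minimal additive-synthesis sketch in Python/NumPy: a tone is built as a sum of sinusoidal partials under a decay envelope. The partial frequencies, amplitudes, and envelope here are illustrative assumptions, not the synthesizer evaluated in the paper.

```python
import numpy as np

def additive_synth(partials, duration, sr=44100):
    """Sum sinusoidal partials; each partial is a (frequency_hz, amplitude) pair."""
    t = np.linspace(0.0, duration, int(sr * duration), endpoint=False)
    signal = np.zeros_like(t)
    for freq, amp in partials:
        signal += amp * np.sin(2 * np.pi * freq * t)
    # Simple exponential decay envelope so the tone dies away naturally.
    signal *= np.exp(-3.0 * t / duration)
    return signal / np.max(np.abs(signal))  # normalise to [-1, 1]

# Hypothetical bell-like tone; partial frequencies/amplitudes are illustrative.
tone = additive_synth([(440.0, 1.0), (880.0, 0.5), (1320.0, 0.25)], duration=2.0)
```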
Egocentric distances are often underestimated in virtual environments viewed through head-mounted displays (HMDs). Previous studies suggest that peripheral vision can influence distance perception; specifically, light in the periphery may improve distance judgments in HMDs. In this study, we conducted a series of experiments with varied virtual peripheral frames around the viewport. We found that the brightness of the peripheral frame significantly influences distance judgments when the frame is brighter than a certain threshold. In addition, we found that applying a pixelation effect in the peripheral-vision area can also improve distance judgments. This result implies that augmenting peripheral vision with secondary low-resolution displays may improve distance judgments in HMDs. Lastly, we varied the size and shape of the virtual frame: a larger field of view resulted in significantly more accurate distance judgments, while the shape of the frame did not influence distance judgments.
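As a rough illustration of a peripheral pixelation effect of the kind described, the sketch below block-averages everything outside a central circular region of a frame. The block size, region radius, and divisibility assumption are illustrative, not the parameters used in the study.

```python
import numpy as np

def pixelate_periphery(image, center_radius_frac=0.5, block=16):
    """Pixelate everything outside a central circular (foveal) region.

    image: H x W x 3 float array; H and W are assumed divisible by `block`
    (an illustrative simplification, not the study's implementation).
    """
    h, w = image.shape[:2]
    # Block-average to create a pixelated version of the full frame.
    coarse = image.reshape(h // block, block, w // block, block, -1).mean(axis=(1, 3))
    pixelated = np.repeat(np.repeat(coarse, block, axis=0), block, axis=1)
    # Circular mask: True where the original full-resolution pixels are kept.
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2.0, xx - w / 2.0)
    keep = r < center_radius_frac * min(h, w) / 2.0
    return np.where(keep[..., None], image, pixelated)
```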
For very rough surfaces, friction-induced vibrations contain frequencies that shift multiplicatively with sliding speed. Given the poor capacity of the somatosensory system to discriminate frequencies, this fact raises the question of how accurately finger sliding speed must be known during the reproduction of virtual textures with a tactile display. During active touch, ten observers were asked to discriminate texture recordings corresponding to different speeds. The stimuli were constructed from a common texture recording that was resampled at various rates to simulate a set of different swiping speeds. On each trial, observers swiped their finger in rapid succession over a glass plate that vibrated to reproduce three texture recordings; two of these recordings were identical, and the third represented the texture swiped at a different speed. Observers identified which of the three samples felt different. Seven observers reliably detected differences when the speed varied by 60, 80, or 100 millimetres per second, while the other three did not reach a discrimination threshold. These results suggest that the accuracy with which swiping speed must be measured during texture reproduction may be considerably lower than is commonly assumed in the literature.
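A minimal sketch of the resampling idea, assuming linear interpolation and an illustrative sample rate: because friction-induced spectra scale multiplicatively with sliding speed, re-timing a vibration recording by a given factor shifts all of its frequency components by that same factor. The function name and parameters below are assumptions for illustration, not the authors' code.

```python
import numpy as np

def retime_texture(vibration, speed_ratio, sr=10000):
    """Simulate a different sliding speed by re-timing a vibration recording.

    Playing the signal back `speed_ratio` times faster multiplies every
    frequency component by that factor, mirroring the paper's premise that
    texture spectra shift multiplicatively with speed (sr is illustrative).
    """
    n = len(vibration)
    old_t = np.arange(n) / sr
    # Sample the original waveform on a compressed/stretched time base.
    new_t = np.arange(int(n / speed_ratio)) * speed_ratio / sr
    return np.interp(new_t, old_t, vibration)

# e.g. a recording made at 100 mm/s re-rendered as if swiped at 160 mm/s:
# faster = retime_texture(recording, speed_ratio=1.6)
```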
Distance is commonly underperceived in virtual environments (VEs) compared to real environments. Past work suggests that displaying a replica VE based on the real surrounding environment leads to more accurate judgments of distance, but that work lacked the necessary control conditions to firmly support this conclusion. Other research indicates that walking through a VE with visual feedback improves judgments of distance and size. This study evaluated and compared these two methods for improving perceived distance in VEs. All participants experienced a replica VE based on the real lab. In one condition, participants visually previewed the real lab prior to experiencing the replica VE; in another condition, they did not. Participants performed blind-walking judgments of distance as well as judgments of size in the replica VE before and after walking interaction. Distance judgments were more accurate in the preview condition than in the no-preview condition, but size judgments were unaffected by visual preview. Both distance and size judgments increased after walking interaction, and the improvement was larger for distance than for size judgments. After walking interaction, distance judgments no longer differed based on visual preview, and walking interaction led to a larger improvement in judged distance than did visual preview. These data suggest that walking interaction may be more effective than visual preview as a method for improving perceived space in a VE.
Particle-based simulations are used across many science domains, and it is well known that stereoscopic viewing and kinetic depth enhance our ability to perceive the 3D structure of such data. However, the relative advantages of stereo and kinetic depth have not been studied for point cloud data, although they have been studied for 3D networks. This paper reports two experiments assessing human ability to perceive 3D structures in point clouds as a function of different viewing parameters. In the first experiment, the number of discrete views was varied to determine the extent to which smooth motion is needed; in addition, half the trials used stereoscopic viewing and half did not. The results showed kinetic depth to be more beneficial than stereo viewing so long as the motion was smooth. The second experiment varied the amplitude of oscillatory motion from zero to sixteen degrees. The results showed detection rate increasing with amplitude, with the best performance at amplitudes of four degrees and greater.
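To illustrate the oscillatory (kinetic-depth) motion manipulation, the sketch below rocks a point cloud about the vertical axis with a given half-amplitude and orthographically projects each frame. The frame counts, axis choice, and projection are illustrative assumptions, not the experiment's actual stimulus code.

```python
import numpy as np

def oscillation_frames(points, amplitude_deg=4.0, n_frames=60, period_frames=60):
    """Yield 2D projections of a point cloud rocking about the vertical axis.

    `amplitude_deg` is the half-angle of the oscillation (the paper found
    amplitudes of four degrees and greater to work best); the frame counts
    and orthographic projection here are illustrative assumptions.
    """
    for i in range(n_frames):
        theta = np.radians(amplitude_deg) * np.sin(2 * np.pi * i / period_frames)
        c, s = np.cos(theta), np.sin(theta)
        rot = np.array([[c, 0.0, s],
                        [0.0, 1.0, 0.0],
                        [-s, 0.0, c]])   # rotation about the y (vertical) axis
        rotated = points @ rot.T
        yield rotated[:, :2]             # project onto the screen plane

# e.g. points = np.random.randn(1000, 3); frames = list(oscillation_frames(points))
```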