Measuring cognitive load and stress is crucial for ubiquitous human-computer interaction applications, e.g., smart healthcare, smart driving, and robotics, which must dynamically understand and respond to users' mental state. Researchers have attempted various quantitative methods, both physiological and behavioral. However, because sensitivity, reliability, and usability cannot all be satisfied at the same time, many current methods are not ideal for ubiquitous applications. In this paper, we propose a novel photoplethysmogram (PPG)-based stress-induced vascular response index (sVRI) to measure cognitive load and stress. We provide the basic methodology and a detailed algorithmic framework to validate sVRI measurement. We verify the sensitivity, reliability, and usability of the sVRI in transmission mode (sVRI-t) and reflection mode (sVRI-r), respectively, using arithmetic calculations as cognitive tasks. Compared with blood pressure (BP), heart rate variability (HRV), and electrodermal activity (EDA) recorded simultaneously, our findings show sVRI's potential as a sensitive, reliable, and usable parameter, and suggest sVRI-r's potential for integration with ubiquitous touch interactions in dynamic cognition- and stress-sensing scenarios.
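The abstract does not spell out how the sVRI is computed from the PPG waveform. Purely as an illustrative sketch, one plausible family of vascular indices compares beat-by-beat PPG pulse amplitudes during a cognitive task against a rest baseline, since stress-related vasoconstriction tends to shrink the pulse; the function names and the amplitude-ratio definition below are our assumptions, not the authors' algorithm.

```python
import numpy as np
from scipy.signal import find_peaks

def pulse_amplitudes(ppg, fs):
    """Approximate peak-to-trough amplitude of each PPG pulse (illustrative)."""
    peaks, _ = find_peaks(ppg, distance=int(0.4 * fs))    # >= 0.4 s between beats
    troughs, _ = find_peaks(-ppg, distance=int(0.4 * fs))
    n = min(len(peaks), len(troughs))
    return ppg[peaks[:n]] - ppg[troughs[:n]]

def vascular_index(ppg_task, ppg_rest, fs=100):
    """Hypothetical stress index: relative drop in pulse amplitude under
    load (peripheral vasoconstriction reduces the PPG pulse)."""
    a_task = np.median(pulse_amplitudes(ppg_task, fs))
    a_rest = np.median(pulse_amplitudes(ppg_rest, fs))
    return 1.0 - a_task / a_rest   # larger value suggests higher load/stress
```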
Underestimation of egocentric distances in immersive virtual environments using various head-mounted displays (HMDs) has been a puzzling topic of research interest for several years. As more commodity-level systems become available to developers, it is important to test the variation in underestimation in each system, since the reasons for underestimation remain elusive. In this paper, we examine several different systems in two experiments and comparatively evaluate how much users underestimate distances with them. To test distance estimation, a standard indirect blind-walking task was used. An Oculus Rift DK1, a weighted Oculus Rift DK1, an Oculus Rift DK1 with an artificially restricted field of view, an Nvis SX60, an Nvis SX111, an Oculus Rift DK2, an Oculus Rift consumer version (CV1), and an HTC Vive were tested. The weighted and restricted-field-of-view HMDs were evaluated to determine the effect of these factors on distance underestimation; the other systems were evaluated because they are popular and widely available. We found that weight and field-of-view restrictions heightened underestimation in the Rift DK1, with results comparable to the Nvis SX60 and SX111. The Oculus Rift in its DK1 and consumer versions exhibited the least distance compression, and in general commodity-level HMDs provided more accurate estimates of distance than the prior generation of head-mounted displays.
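For readers unfamiliar with the protocol, distance compression in an indirect blind-walking task is commonly summarized as the ratio of walked to actual distance; below is a minimal sketch with invented trial values (the numbers are illustrative, not the study's data).

```python
import statistics

def estimation_ratios(walked, actual):
    """Per-trial judged/actual ratio; 1.0 = veridical, < 1.0 = underestimation."""
    return [w / a for w, a in zip(walked, actual)]

# Hypothetical trials with 3 m, 4 m, and 5 m targets
ratios = estimation_ratios(walked=[2.6, 3.3, 4.1], actual=[3.0, 4.0, 5.0])
print(f"mean estimation ratio: {statistics.mean(ratios):.2f}")
```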
When refining or personalizing a design, we count on being able to modify or move an element by changing its parameters rather than creating it anew in a different form or location, a standard utility in graphic and auditory authoring tools. Similarly, we need to tune vibrotactile sensations to fit new use cases, distinguish members of communicative icon sets, and personalize items. For tactile vibration display, however, we lack knowledge of the human perceptual mappings that must underlie such tools. Based on evidence that affective dimensions are a natural way to tune vibrations for practical purposes, we attempted to manipulate perception along three emotion dimensions (agitation, liveliness, and strangeness) using engineering parameters of hypothesized relevance. Results from two user studies show that an automatable algorithm can increase a vibration's perceived agitation and liveliness to different degrees via signal energy, while increasing its discontinuity or randomness makes it more strange. These continuous mappings apply across diverse base vibrations, although the extent of achievable emotion change varies. These results illustrate the potential for developing vibrotactile emotion controls as efficient tuning tools for designers and end-users.
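As an illustration of the kind of automatable parameter manipulation described, the sketch below raises a base vibration's signal energy (hypothesized above to increase agitation and liveliness) or injects random discontinuity (hypothesized to increase strangeness). The gain values and the gating scheme are our assumptions, not the algorithm evaluated in the studies.

```python
import numpy as np

def boost_energy(signal, gain=1.5):
    """Scale amplitude to raise signal energy (clipped to actuator range)."""
    return np.clip(signal * gain, -1.0, 1.0)

def add_discontinuity(signal, fs=1000, seg_hz=8, duty=0.7, seed=0):
    """Randomly mute short segments so the vibration feels discontinuous."""
    rng = np.random.default_rng(seed)
    seg = int(fs / seg_hz)                              # segment length in samples
    gate = np.repeat(rng.random(len(signal) // seg + 1) < duty, seg)
    return signal * gate[: len(signal)]

t = np.linspace(0, 1, 1000, endpoint=False)
base = 0.5 * np.sin(2 * np.pi * 250 * t)                # 250 Hz base vibration
more_agitated = boost_energy(base)
more_strange = add_discontinuity(base)
```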
In computer graphics, illuminating a scene is a complex task, typically consisting of cycles of adjusting and rendering the scene to see the effects. We propose a technique for visualization of light as a tensor field via extracting its properties (i.e. intensity, direction, diffuseness) from radiance measurements and showing these properties as a grid of shapes over a volume of a scene. Presented in the viewport, our visualizations give an understanding of the illumination conditions in the measured volume for both the local values and the global variations of light properties. Additionally, they allow quick inferences of the resulting visual appearance of (objects in) scenes without the need to render them. In our evaluation, observers performed at least as well using visualizations as using renderings when they were comparing illumination between parts of a scene and inferring the final appearance of objects in the measured volume. Therefore, the proposed visualizations are expected to help lighting professionals by providing perceptually relevant information about the light in a scene.
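The extraction of light properties is not detailed in the abstract; as a hedged sketch of the general idea, first-order properties of the light at a point can be estimated from radiance samples over incoming directions: a scalar intensity (mean radiance), a mean light vector (dominant direction), and a diffuseness expressing how weakly that vector dominates the scalar part. The definitions below are common approximations, not necessarily the authors' formulation.

```python
import numpy as np

def light_properties(directions, radiance):
    """directions: (n, 3) unit vectors of sampled incoming light;
    radiance: (n,) radiance values along those directions."""
    intensity = radiance.mean()                            # scalar component
    vec = (directions * radiance[:, None]).mean(axis=0)    # mean light vector
    direction = vec / np.linalg.norm(vec)
    # 1.0 for fully isotropic light, 0.0 for a single point source.
    diffuseness = 1.0 - np.linalg.norm(vec) / intensity
    return intensity, direction, diffuseness

# Example: a directional lobe from +z plus a weak ambient term
rng = np.random.default_rng(1)
dirs = rng.normal(size=(2048, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
rad = 0.2 + 2.0 * np.clip(dirs[:, 2], 0.0, None)
print(light_properties(dirs, rad))
```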
For the manufacturing industry, the increasing demand for mass customization, coupled with the traditional requirement of delivering products with no sacrifice in quality, currently leads to various challenges. As one consequence of these profound changes, the way the production of a new derivative (e.g., a car model) is planned is successively transforming from a hardware-based into an entirely digitized process. For instance, today the vast majority of manual assembly tasks within an automotive final assembly line are verified in virtual worlds using recent visualization techniques such as head-mounted displays (HMDs). The validity of the results gathered in such simulations, and consequently their significance for planning such lines, is however widely unknown, in particular with regard to human locomotion behavior. Consequently, to increase the prediction quality of virtual manufacturing in general, it is crucial to investigate the behavioral disparity between walking tasks performed in reality without any equipment and in immersive virtual reality while wearing an HMD. This paper therefore presents a holistic evaluation, analyzing human gait in three experiments replicated both in virtual reality and in reality. The experiments cover linear walking, non-linear walking, and obstacle avoidance. The results provide novel insights into the effect of walking in immersive virtual reality on specific gait parameters. For linear walking, the results reveal that the HMD has only a small effect (5%-8%) on walking velocity. Regarding non-linear walking towards an oriented target, geometrical path comparison shows a negligible influence on the turning pattern for lower path curvatures, while for higher curvatures a small effect is observed. Finally, in the obstacle avoidance experiment, only minor differences in clearance distance (approximately 6%) can be observed in two different obstacle configurations. The overall differences in walking behavior are modeled using regression models, allowing their general use within various domains. In summary, VR can be used to analyze and plan human locomotion, with the caveat that specific details may have to be computationally adjusted in order to transfer findings to reality.
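To make the regression-based transfer concrete, here is a sketch of how a VR-measured walking velocity might be mapped back to an expected real-world velocity with a simple linear fit; the paired values and the resulting coefficients are invented for illustration, not the models reported in the paper.

```python
import numpy as np

def fit_vr_to_real(v_vr, v_real):
    """Least-squares linear map: real ~= a * vr + b (illustrative)."""
    a, b = np.polyfit(v_vr, v_real, deg=1)
    return a, b

# Hypothetical paired velocities (m/s); VR values run ~5-8% below reality
v_vr = np.array([1.05, 1.10, 1.18, 1.25, 1.31])
v_real = np.array([1.12, 1.18, 1.26, 1.34, 1.41])
a, b = fit_vr_to_real(v_vr, v_real)
print(f"real = {a:.2f} * vr + {b:.2f}")
```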
We introduce the problem of computing a human-perceived softness measure for virtual 3D objects. As virtual objects do not exist in the real world, we do not directly consider their physical properties but instead compute the human-perceived softness of their geometric shapes. In an initial experiment, we find that humans are highly consistent in their responses when given a pair of vertices on a 3D model and asked to select the vertex they perceive to be softer. This motivates us to adopt a crowdsourcing and machine-learning framework. We collect crowdsourced data for such pairs of vertices. We then combine a learning-to-rank approach with a multi-layer neural network to learn a non-linear softness measure mapping any vertex to a softness value. For a new 3D shape, we can use the learned measure to compute the relative softness of every vertex on its surface. We demonstrate the robustness of our framework on a variety of 3D shapes and compare our non-linear learning approach with a linear method from previous work. Finally, we demonstrate the accuracy of our learned measure with user studies comparing it with the human-perceived softness of both virtual and real objects, and we show its usefulness in several applications.
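Below is a minimal sketch of the pairwise learning-to-rank setup described, using a RankNet-style loss over a small multi-layer network; the feature dimensionality and architecture are placeholders rather than the paper's exact network.

```python
import torch
import torch.nn as nn

class SoftnessNet(nn.Module):
    """Maps per-vertex geometric features to a scalar softness score."""
    def __init__(self, n_features=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 1))

    def forward(self, x):
        return self.mlp(x).squeeze(-1)

def pairwise_rank_loss(model, feats_softer, feats_harder):
    """RankNet-style loss: the vertex judged softer should score higher."""
    margin = model(feats_softer) - model(feats_harder)
    return nn.functional.softplus(-margin).mean()   # = -log sigmoid(margin)

model = SoftnessNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
feats_softer, feats_harder = torch.randn(128, 16), torch.randn(128, 16)  # dummy pairs
loss = pairwise_rank_loss(model, feats_softer, feats_harder)
opt.zero_grad(); loss.backward(); opt.step()
```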
We present a study into the perception of display brightness as related to the physical size and distance of the screen from the observer. Brightness perception is a complex topic influenced by a number of lower- and higher-order factors, with empirical evidence from the cinema industry suggesting that display size may play a significant role. To test this hypothesis, we conducted a series of user studies exploring brightness perception for a range of displays and observer distances that span representative use scenarios. Our results suggest that retinal size alone is not sufficient to explain the range of discovered brightness variations, but is sufficient in combination with physical distance from the observer. The resulting model can be used as a step towards perceptually correcting image brightness based on target display parameters, which can be leveraged for energy management and the preservation of artistic intent. A pilot study suggests that adaptation luminance additionally modulates the magnitude of the effect.
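The abstract does not give the model's functional form; purely as an illustrative sketch, a correction that derives a luminance gain from retinal size and viewing distance might look like the following, where the power-law form and the coefficients are assumptions rather than fitted values from the study.

```python
import math

def brightness_gain(width_m, distance_m, alpha=0.1, beta=0.05,
                    ref_angle_deg=30.0, ref_distance_m=1.0):
    """Hypothetical gain equalizing perceived brightness across displays;
    alpha, beta and the power-law form are illustrative, not fitted."""
    angle = 2 * math.degrees(math.atan(width_m / (2 * distance_m)))  # retinal size
    return (ref_angle_deg / angle) ** alpha * (distance_m / ref_distance_m) ** beta

# A phone at arm's length vs. a large TV across the room
print(brightness_gain(width_m=0.07, distance_m=0.35))
print(brightness_gain(width_m=1.2, distance_m=2.5))
```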