Measuring cognitive load and stress is crucial for ubiquitous human-computer interaction applications that must dynamically understand and respond to users' mental state, e.g., smart healthcare, smart driving, and robotics. Researchers have attempted various quantitative methods, both physiological and behavioral. However, because sensitivity, reliability, and usability cannot yet be satisfied simultaneously, many current methods are not ideal for ubiquitous applications. In this paper, we propose a novel photoplethysmogram (PPG)-based stress-induced vascular response index (sVRI) to measure cognitive load and stress. We describe the basic methodology and a detailed algorithm framework for sVRI measurement, and verify the sensitivity, reliability, and usability of the sVRI in transmission mode (sVRI-t) and reflection mode (sVRI-r), respectively, using arithmetic calculations as cognitive tasks. Compared with blood pressure (BP), heart rate variability (HRV), and electrodermal activity (EDA) recorded simultaneously, our findings showed the sVRI's potential as a sensitive, reliable, and usable parameter, and suggested sVRI-r's potential for integration with ubiquitous touch interactions in dynamic cognition- and stress-sensing scenarios.
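The abstract does not define how the sVRI is computed, so nothing below should be read as the paper's method. As a generic illustration of the kind of PPG preprocessing on which such indices are built, this sketch detects beats in a synthetic PPG-like trace and extracts per-beat pulse amplitudes; the sampling rate, pulse rate, and thresholds are all assumptions.

```python
import numpy as np

# Synthetic PPG-like trace. The sVRI formula itself is not given in the
# abstract; this only shows a generic building block: beat detection and
# pulse-amplitude extraction. Sampling and pulse rates are assumed.
fs = 100                                  # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)              # 10 s of signal
rng = np.random.default_rng(0)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.02 * rng.normal(size=t.size)

def beat_amplitudes(x, thresh=0.8, min_gap=50):
    """Peak-to-trough amplitude of each successive pulse.

    A sample is a candidate peak if it exceeds both neighbours and the
    threshold; candidates within `min_gap` samples of the previous kept
    peak are dropped (a crude refractory period standing in for a real
    beat detector).
    """
    kept, last = [], -min_gap
    for i in range(1, len(x) - 1):
        if (x[i] > x[i - 1] and x[i] > x[i + 1]
                and x[i] > thresh and i - last >= min_gap):
            kept.append(i)
            last = i
    # amplitude = peak value minus the minimum before the next peak
    return np.array([x[p] - x[p:q].min() for p, q in zip(kept, kept[1:])])

amps = beat_amplitudes(ppg)               # one amplitude per detected beat
```

In practice a production beat detector (e.g. `scipy.signal.find_peaks` with `height` and `distance` constraints) would replace the hand-rolled loop; the point here is only the shape of the pipeline: raw waveform, beat segmentation, per-beat features.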
Underestimation of egocentric distances in immersive virtual environments viewed through head-mounted displays (HMDs) has been a puzzling topic of research for several years. As more commodity-level systems become available to developers, it is important to measure how underestimation varies across systems, since its causes remain elusive. In this paper, we examine several systems in two experiments and comparatively evaluate how much users underestimate distances with each. Distance estimation was tested with a standard indirect blind walking task. We evaluated an Oculus Rift DK1, a weighted Oculus Rift DK1, an Oculus Rift DK1 with an artificially restricted field of view, an Nvis SX60, an Nvis SX111, an Oculus Rift DK2, an Oculus Rift consumer version (CV1), and an HTC Vive. The weighted and restricted-field-of-view HMDs were included to determine the effect of these factors on distance underestimation; the other systems were included because they are popular and widely available. We found that added weight and field-of-view restriction heightened underestimation in the Rift DK1, with results comparable to those of the Nvis SX60 and SX111. The Oculus Rift in its DK1 and consumer versions exhibited the least distance compression, and in general commodity-level HMDs provided more accurate distance estimates than the prior generation of head-mounted displays.
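In indirect blind walking, participants view a target, then walk to its remembered location without vision; compression is typically reported as the shortfall of walked distance relative to target distance. A minimal sketch of that metric (the trial values are invented for illustration, not the paper's data):

```python
# Illustrative only: how distance compression is commonly quantified in
# blind-walking studies. The trials below are made-up numbers.
def underestimation_pct(target_m, walked_m):
    """Percent by which walked distance falls short of target distance."""
    return 100.0 * (target_m - walked_m) / target_m

trials = [(3.0, 2.4), (4.5, 3.8), (6.0, 4.9)]   # (target, walked) in metres
mean_err = sum(underestimation_pct(t, w) for t, w in trials) / len(trials)
```

A value of 0% would mean veridical walking; the positive means reported for most HMDs in this literature correspond to participants stopping short of the target.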
We introduce the problem of computing a human-perceived softness measure for virtual 3D objects. Because virtual objects do not exist in the real world, we do not consider their physical properties directly but instead compute the human-perceived softness of their geometric shapes. In an initial experiment, we find that humans are highly consistent in their responses when shown a pair of vertices on a 3D model and asked to select the vertex they perceive as softer. This motivates us to adopt a crowdsourcing and machine learning framework. We collect crowdsourced judgments for such vertex pairs, then combine a learning-to-rank approach with a multi-layer neural network to learn a non-linear softness measure mapping any vertex to a softness value. For a new 3D shape, the learned measure yields the relative softness of every vertex on its surface. We demonstrate the robustness of our framework on a variety of 3D shapes and compare our non-linear learning approach with a linear method from previous work. Finally, we demonstrate the accuracy of the learned measure through user studies comparing it with the human-perceived softness of both virtual and real objects, and we show its usefulness in several applications.
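One standard way to combine learning-to-rank with a multi-layer network, as the abstract describes, is a RankNet-style pairwise logistic loss over "vertex a is softer than vertex b" judgments. The sketch below trains a tiny two-layer scorer on synthetic pairs; the feature dimensions, architecture, and synthetic labels are assumptions for illustration, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-vertex geometric features (e.g. curvature statistics);
# dimensions and architecture are illustrative, not from the paper.
D, H = 4, 8
W1 = rng.normal(0, 0.5, (D, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)

def softness(x):
    """Two-layer MLP mapping a feature vector to a scalar softness score."""
    h = np.tanh(x @ W1 + b1)
    return (h @ W2 + b2).ravel()

def pairwise_step(xa, xb, lr=0.1):
    """RankNet-style update on one crowd judgment: vertex a softer than b."""
    global W1, b1, W2
    ha, hb = np.tanh(xa @ W1 + b1), np.tanh(xb @ W1 + b1)
    sa, sb = (ha @ W2)[0], (hb @ W2)[0]
    p = 1.0 / (1.0 + np.exp(sb - sa))        # model's P(a softer than b)
    g = p - 1.0                              # d(-log p)/d sa  (and -g for sb)
    dha = g * W2.ravel() * (1 - ha**2)       # backprop through tanh
    dhb = -g * W2.ravel() * (1 - hb**2)
    W2 -= lr * (g * (ha - hb))[:, None]      # output bias cancels in sa - sb
    W1 -= lr * (np.outer(xa, dha) + np.outer(xb, dhb))
    b1 -= lr * (dha + dhb)

# Synthetic crowd data: the vertex with the larger first feature is "softer".
for _ in range(3000):
    xa, xb = rng.normal(size=D), rng.normal(size=D)
    if xa[0] < xb[0]:
        xa, xb = xb, xa
    pairwise_step(xa, xb)
```

After training, `softness` can score every vertex of a new shape independently; only relative order is meaningful, which is exactly what a pairwise loss constrains.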
We present a study of the perception of display brightness as it relates to the physical size of the screen and its distance from the observer. Brightness perception is a complex topic influenced by a number of lower- and higher-order factors, with empirical evidence from the cinema industry suggesting that display size may play a significant role. To test this hypothesis, we conducted a series of user studies exploring brightness perception across a range of displays and viewing distances spanning representative use scenarios. Our results suggest that retinal size alone is not sufficient to explain the range of observed brightness variations, but is sufficient in combination with physical distance from the observer. The resulting model is a step toward perceptually correcting image brightness based on target display parameters, which can be leveraged for energy management and the preservation of artistic intent. A pilot study suggests that adaptation luminance additionally modulates the magnitude of the effect.