Measuring cognitive load and stress is crucial for ubiquitous human-computer interaction applications that dynamically understand and respond to users' mental state, e.g., in smart healthcare, smart driving, and robotics. Researchers have attempted various quantitative methods, such as physiological and behavioral measures. However, because sensitivity, reliability, and usability cannot be satisfactorily met at the same time, many current methods are not ideal for ubiquitous applications. In this paper, we propose a novel photoplethysmogram (PPG)-based stress-induced vascular response index (sVRI) to measure cognitive load and stress. We present the basic methodology and a detailed algorithm framework for computing the sVRI. We verify the sensitivity, reliability, and usability of the sVRI in transmission mode (sVRI-t) and reflection mode (sVRI-r), respectively, using arithmetic calculations as cognitive tasks. Compared with blood pressure (BP), heart rate variability (HRV), and electrodermal activity (EDA) recorded simultaneously, our findings show the sVRI's potential as a sensitive, reliable, and usable parameter, and suggest the sVRI-r's potential for integration with ubiquitous touch interactions in dynamic cognition- and stress-sensing scenarios.
Underestimation of egocentric distances in immersive virtual environments using various head-mounted displays (HMDs) has been a puzzling topic of research interest for several years. As more commodity-level systems become available to developers, it is important to test the variation in underestimation in each system, since the reasons for underestimation remain elusive. In this paper, we examine several different systems in two experiments and comparatively evaluate how much users underestimate distances with them. To test distance estimation, a standard indirect blind walking task was used. An Oculus Rift DK1, a weighted Oculus Rift DK1, an Oculus Rift DK1 with an artificially restricted field of view, an Nvis SX60, an Nvis SX111, an Oculus Rift DK2, an Oculus Rift consumer version (CV1), and an HTC Vive were tested. The weighted and restricted-field-of-view HMDs were evaluated to determine the effect of these factors on distance underestimation; the other systems were evaluated because they are popular and widely available. We found that weight and field-of-view restrictions heightened underestimation in the Rift DK1, with results comparable to the Nvis SX60 and SX111. The Oculus Rift in its DK1 and consumer versions exhibited the least distance compression, and in general commodity-level HMDs provided more accurate estimates of distance than the prior generation of head-mounted displays.
In this article, we investigate human perception of inertial mass discrimination during active planar manipulation, which is common in daily tasks such as moving heavy and bulky objects. Psychophysical experiments were conducted to develop a human inertial mass perception model to improve the usability and acceptance of novel haptically collaborating robotic systems. In contrast to the existing literature, large-scale movements involving a broad selection of reference stimuli and larger sample sizes were used. Linear mixed models were fitted to model dependent errors in the longitudinal perceptual data. Differential thresholds increased exponentially near the perception boundary and leveled off to constant behavior for higher stimuli. We found no effect of movement direction (sagittal vs. transversal) but a large effect of movement type (precise vs. imprecise). Recommendations for implementing the findings in novel physical assist devices are given.
Sound synthesis is the process of generating artificial sounds through some form of simulation or modelling. Synthesis has been demonstrated in the area of sound effects, which are used in the production of a range of popular media such as video games, TV, film, and augmented and virtual reality. This paper aims to identify which sound synthesis methods achieve the goal of producing a believable audio sample that may replace a recorded sound sample. A perceptual evaluation of five different sound synthesis techniques was undertaken: additive synthesis, statistical modelling synthesis with two different feature sets, physically inspired synthesis, concatenative synthesis, and sinusoidal modelling synthesis were all compared. The evaluation used eight different sound-class stimuli and 66 different samples. The additive synthesizer was the only synthesis method not considered significantly different from the reference sample across all sound classes. The results demonstrate that sound synthesis can be perceived as being as realistic as a recorded sample, and we make recommendations on which synthesis methods to use for different sound classes.
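Additive synthesis, the best-rated method in this evaluation, builds a sound as a sum of sinusoidal partials, each with its own frequency, amplitude, and envelope. A minimal sketch of the general technique, with exponentially decaying partials; the partial frequencies and decay rates below are illustrative values, not parameters from the paper:

```python
import numpy as np

def additive_synth(partials, duration=1.0, sr=44100):
    """Sum decaying sinusoidal partials given as (freq_hz, amplitude, decay_rate)."""
    t = np.linspace(0.0, duration, int(sr * duration), endpoint=False)
    signal = np.zeros_like(t)
    for freq, amp, decay in partials:
        # Each partial is a sine wave shaped by an exponential amplitude envelope.
        signal += amp * np.exp(-decay * t) * np.sin(2.0 * np.pi * freq * t)
    peak = np.max(np.abs(signal))
    return signal / peak if peak > 0 else signal  # normalise to [-1, 1]

# Illustrative bell-like tone: inharmonic partials, higher ones decaying faster.
tone = additive_synth([(440.0, 1.0, 3.0), (1100.0, 0.5, 5.0), (1870.0, 0.25, 8.0)])
```

In practice, the partial parameters are typically extracted from analysis of a recorded target sound rather than chosen by hand, which is what allows the synthetic result to approximate a real sample.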
Egocentric distances are often underestimated in virtual environments through head-mounted displays (HMDs). Previous studies suggest that peripheral vision can influence distance perception. Specifically, light in the periphery may improve distance judgments in HMDs. In this study, we conducted a series of experiments with varied virtual peripheral frames around the viewport. We found that the brightness of the peripheral frame significantly influences distance judgments when the frame is brighter than a certain threshold. In addition, we found that applying a pixelation effect in the peripheral-vision area can also trigger improved distance judgments. The result also implies that augmenting peripheral vision with secondary low-resolution displays may improve distance judgments in HMDs. Lastly, we varied the size and shape of the virtual frame. A larger field of view resulted in significantly more accurate distance judgments, and the shape of the frame did not influence distance judgments.
Robot-assisted surgery is now commonly used as a more efficient alternative to traditional surgical options. Both surgeons and patients benefit from these systems, which offer many advantages including reduced trauma, blood loss, and complications, as well as better ergonomics. However, a remaining limitation of currently available surgical systems is the lack of force feedback due to the teleoperation setting, which prevents direct interaction with the patient. Once force information is obtained, either by a sensing device or indirectly through vision-based force estimation, the question arises of how to convey it to the surgeon. A direct solution is to transmit the force information to the surgeon's hand using a haptic device. However, this option comes with constraints such as cost, controller stability, degrees of freedom, and space limitations. An attractive alternative is sensory substitution, which transcodes information from one sensory modality to another. In particular, in this work we used visual feedback to convey an effective perception of the interaction forces to the surgeon. Until now, the use of the visual modality for force feedback has not been carefully evaluated. For this reason, we conducted an experimental study to assess the potential benefits of this modality and to understand surgeons' perceptual preferences among different visual cues in order to improve their design. In a study with twenty-eight surgeons from various specialties, we found that 96% of the users preferred visual feedback over no feedback. Based on a careful statistical, graphical, and perceptual analysis, we also provide user-centered recommendations for the design of visual displays for robotic surgical systems.
We introduce the problem of computing a human-perceived softness measure for virtual 3D objects. Because virtual objects do not exist in the real world, we do not directly consider their physical properties but instead compute the human-perceived softness of their geometric shapes. In an initial experiment, we find that humans are highly consistent in their responses when given a pair of vertices on a 3D model and asked to select the vertex they perceive to be softer. This motivates us to adopt a crowdsourcing and machine learning framework. We collect crowdsourced data for such pairs of vertices. We then combine a learning-to-rank approach with a multi-layer neural network to learn a non-linear softness measure that maps any vertex to a softness value. For a new 3D shape, we can use the learned measure to compute the relative softness of every vertex on its surface. We demonstrate the robustness of our framework on a variety of 3D shapes and compare our non-linear learning approach with a linear method from previous work. Finally, we demonstrate the accuracy of our learned measure through user studies comparing it with the human-perceived softness of both virtual and real objects, and we show its usefulness in several applications.
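The pairwise learning-to-rank idea used here can be sketched as follows: a scoring function assigns each vertex a softness score, and a logistic (RankNet-style) loss over crowdsourced "vertex A is softer than vertex B" pairs pushes the softer vertex's score above the harder one's. This is a minimal illustration with a linear scorer and synthetic 2-D vertex features, not the authors' multi-layer network or geometric feature set:

```python
import numpy as np

def pairwise_rank_loss(w, x_soft, x_hard):
    """RankNet-style logistic loss: P(soft > hard) = sigmoid(score_soft - score_hard)."""
    margin = x_soft @ w - x_hard @ w           # score difference for each labeled pair
    return np.mean(np.log1p(np.exp(-margin)))  # -log sigmoid(margin), averaged

def grad(w, x_soft, x_hard):
    d = x_soft - x_hard
    sig = 1.0 / (1.0 + np.exp(d @ w))          # sigmoid(-margin)
    return -(d * sig[:, None]).mean(axis=0)

rng = np.random.default_rng(0)
# Synthetic pairs where the first feature dimension correlates with softness.
x_hard = rng.normal(size=(200, 2))
x_soft = x_hard + np.array([1.0, 0.0]) + 0.1 * rng.normal(size=(200, 2))

w = np.zeros(2)
for _ in range(200):                           # plain gradient descent on the loss
    w -= 0.5 * grad(w, x_soft, x_hard)
```

After training, the learned weight on the softness-correlated feature is positive, and the scorer can rank every vertex of a new shape by its score; swapping the linear scorer for a small neural network yields the non-linear measure described in the abstract.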
We present a study of the perception of display brightness as related to the physical size of the screen and its distance from the observer. Brightness perception is a complex topic influenced by a number of lower- and higher-order factors, with empirical evidence from the cinema industry suggesting that display size may play a significant role. To test this hypothesis, we conducted a series of user studies exploring brightness perception for a range of displays and observer distances that span representative use scenarios. Our results suggest that retinal size alone is not sufficient to explain the range of observed brightness variations, but is sufficient in combination with physical distance from the observer. The resulting model is a step towards perceptually correcting image brightness based on target display parameters, which can be leveraged for energy management and the preservation of artistic intent. A pilot study suggests that adaptation luminance additionally influences the magnitude of the effect.