Eye motions constitute an important part of our daily face-to-face interactions. Even subtle details in the eyes' motions give us clues about a person's thoughts and emotions. Believable and natural animation of the eyes is therefore crucial when creating appealing virtual characters. In this paper, we investigate the perceived naturalness of detailed eye motions, more specifically of jitter in the eyeball rotation and pupil diameter, on three virtual characters with differing levels of realism. Participants watched stimuli with six scaling factors from 0 to 1 in increments of 0.2, varying eye rotation and pupil size jitter individually, and indicated whether they would like to increase or decrease the level of jitter to make the animation look more natural. Based on participants' responses, we determine the scaling factors for noise attenuation perceived as most natural for each character when using motion-captured eye motions. We compute the corresponding average jitter amplitudes for the eyeball rotation and pupil size to serve as guidelines for other characters. We find that the amplitudes perceived as most natural depend on the character, with the medium-realism character requiring the largest scaling factors.
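As a minimal sketch of the manipulation described above (not the authors' implementation), the snippet below attenuates motion-captured jitter by a scaling factor s in [0, 1], separating a smoothed baseline from the high-frequency residual; the moving-average window is an assumed detail.

```python
import numpy as np

def scale_jitter(signal, s, window=9):
    """Attenuate high-frequency jitter in a 1-D signal (e.g., eyeball
    rotation angle or pupil diameter over time) by a factor s in [0, 1]."""
    kernel = np.ones(window) / window
    baseline = np.convolve(signal, kernel, mode="same")  # low-pass estimate
    jitter = signal - baseline                           # high-frequency residual
    return baseline + s * jitter                         # s = 0: smooth, s = 1: original

# Example: six stimuli with scaling factors 0.0, 0.2, ..., 1.0 as in the study.
rng = np.random.default_rng(0)
pupil = 4.0 + 0.05 * rng.standard_normal(300)  # synthetic pupil-diameter trace (mm)
stimuli = [scale_jitter(pupil, s) for s in np.linspace(0.0, 1.0, 6)]
```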
Measuring cognitive load and stress is crucial for ubiquitous human-computer interaction applications that must dynamically understand and respond to users' mental state, e.g., smart healthcare, smart driving and robotics. Researchers have attempted various quantitative methods, such as physiological and behavioral measures. However, because sensitivity, reliability and usability cannot be satisfactorily met at the same time, many current methods are not ideal for ubiquitous applications. In this paper, we propose a novel photoplethysmogram (PPG)-based stress-induced vascular response index (sVRI) to measure cognitive load and stress. We provide the basic methodology and a detailed algorithm framework to validate sVRI measurement. We verify the sensitivity, reliability and usability of the sVRI in transmission mode (sVRI-t) and reflection mode (sVRI-r), using arithmetic calculations as cognitive tasks. Compared with blood pressure (BP), heart rate variability (HRV), and electrodermal activity (EDA) recorded simultaneously, our findings show sVRI's potential as a sensitive, reliable and usable parameter, and suggest sVRI-r's potential for integration with ubiquitous touch interactions in dynamic cognition- and stress-sensing scenarios.
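The paper's sVRI algorithm is not reproduced here; as a heavily hedged illustration of a PPG-derived vascular measure, the sketch below compares beat-to-beat pulse amplitudes under a cognitive task against a rest baseline (peripheral vasoconstriction under stress tends to reduce PPG pulse amplitude). The peak-detection parameters and the index formula are assumptions, not the published method.

```python
import numpy as np
from scipy.signal import find_peaks

def pulse_amplitudes(ppg, fs):
    """Per-beat amplitudes: systolic peak minus the preceding trough."""
    peaks, _ = find_peaks(ppg, distance=int(0.4 * fs))   # >= 0.4 s between beats
    troughs, _ = find_peaks(-ppg, distance=int(0.4 * fs))
    amps = []
    for p in peaks:
        prior = troughs[troughs < p]
        if prior.size:
            amps.append(ppg[p] - ppg[prior[-1]])
    return np.asarray(amps)

def vascular_index(ppg_task, ppg_rest, fs):
    """Illustrative index: relative amplitude drop under load (larger => more stress).
    This formula is an assumption standing in for the paper's sVRI definition."""
    a_task = pulse_amplitudes(ppg_task, fs).mean()
    a_rest = pulse_amplitudes(ppg_rest, fs).mean()
    return 1.0 - a_task / a_rest
```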
When refining or personalizing a design, we count on being able to modify or move an element by changing its parameters rather than creating it anew in a different form or location -- a standard utility in graphic and auditory authoring tools. Similarly, we need to tune vibrotactile sensations to fit new use cases, distinguish members of communicative icon sets and personalize items. For tactile vibration display, however, we lack knowledge of the human perceptual mappings which must underlie such tools. Based on evidence that affective dimensions are a natural way to tune vibrations for practical purposes, we attempted to manipulate perception along three emotion dimensions (agitation, liveliness, and strangeness) using engineering parameters of hypothesized relevance. Results from two user studies show that an automatable algorithm can increase a vibration's perceived agitation and liveliness to different degrees via signal energy, while increasing its discontinuity or randomness makes it more strange. These continuous mappings apply across diverse base vibrations; the extent of achievable emotion change varies. These results illustrate the potential for developing vibrotactile emotion controls as efficient tuning tools for designers and end-users.
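A minimal sketch, under stated assumptions, of the two control types the studies suggest: raising signal energy (here via amplitude gain) for agitation and liveliness, and injecting discontinuity or randomness for strangeness. The gain, chunk size and dropout rate are illustrative, not the paper's algorithm.

```python
import numpy as np

def boost_energy(v, gain):
    """Scale amplitude (and thus signal energy) of a vibration waveform."""
    return np.clip(gain * v, -1.0, 1.0)

def add_discontinuity(v, dropout=0.3, chunk=64, seed=0):
    """Randomly mute short chunks of the waveform to make it feel 'stranger'."""
    rng = np.random.default_rng(seed)
    out = v.copy()
    for start in range(0, len(v), chunk):
        if rng.random() < dropout:
            out[start:start + chunk] = 0.0
    return out

# Example: a 250 Hz base vibration sampled at 8 kHz for one second.
t = np.linspace(0.0, 1.0, 8000, endpoint=False)
base = 0.5 * np.sin(2.0 * np.pi * 250.0 * t)
livelier = boost_energy(base, 1.8)
stranger = add_discontinuity(base)
```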
In computer graphics, illuminating a scene is a complex task, typically consisting of cycles of adjusting and rendering the scene to see the effects. We propose a technique for visualization of light as a tensor field via extracting its properties (i.e. intensity, direction, diffuseness) from radiance measurements and showing these properties as a grid of shapes over a volume of a scene. Presented in the viewport, our visualizations give an understanding of the illumination conditions in the measured volume for both the local values and the global variations of light properties. Additionally, they allow quick inferences of the resulting visual appearance of (objects in) scenes without the need to render them. In our evaluation, observers performed at least as well using visualizations as using renderings when they were comparing illumination between parts of a scene and inferring the final appearance of objects in the measured volume. Therefore, the proposed visualizations are expected to help lighting professionals by providing perceptually relevant information about the light in a scene.
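As an illustration of the property extraction named above, the sketch below computes intensity, dominant direction and diffuseness from directional radiance samples at a probe point; the diffuseness convention (one minus the ratio of the light-vector norm to total intensity) is an assumption, not necessarily the paper's definition.

```python
import numpy as np

def light_properties(directions, radiances):
    """directions: (N, 3) unit vectors toward incoming light; radiances: (N,)."""
    intensity = radiances.sum()                          # overall amount of light
    light_vec = (radiances[:, None] * directions).sum(axis=0)
    norm = np.linalg.norm(light_vec)
    direction = light_vec / max(norm, 1e-12)             # dominant light direction
    diffuseness = 1.0 - norm / max(intensity, 1e-12)     # 0 = collimated, 1 = uniform
    return intensity, direction, diffuseness
```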
In this paper, we evaluate various image-space modulation techniques that aim to unobtrusively guide viewers' attention. While previous evaluations mainly target desktop settings, we examine their applicability to ultra-wide field of view immersive environments, featuring technical characteristics expected for future-generation head-mounted displays. A custom-built, high-resolution immersive dome environment with high-precision eye tracking is used in our experiments. We investigate the gaze guidance success rate and unobtrusiveness of five different techniques. Our results show promising guiding performance for four of the tested methods. With regard to unobtrusiveness, we find that while no method remains completely unnoticed, many participants do not report any distractions. The evaluated methods show promise for guiding users' attention in wide field of view virtual environment applications, e.g. virtually guided tours or field operation training.
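As a generic stand-in for the techniques compared in the study (not any one of the five), the sketch below applies a gaze-contingent luminance modulation that fades as the viewer's gaze approaches the target, keeping the cue unobtrusive; the radius, strength and fade distance are assumed parameters.

```python
import numpy as np

def modulate(image, target_xy, gaze_xy, radius=40.0, strength=0.15, fade=200.0):
    """image: (H, W) luminance in [0, 1]; positions in pixel coordinates."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    d2 = (xs - target_xy[0]) ** 2 + (ys - target_xy[1]) ** 2
    blob = np.exp(-d2 / (2.0 * radius ** 2))              # soft spot at the target
    gaze_dist = np.hypot(gaze_xy[0] - target_xy[0], gaze_xy[1] - target_xy[1])
    attenuation = min(gaze_dist / fade, 1.0)              # fade the cue near fixation
    return np.clip(image + strength * attenuation * blob, 0.0, 1.0)
```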
In recent years, a variety of methods have been introduced to exploit the decrease in visual acuity of peripheral vision, known as foveated rendering. As increasingly complex shading is requested and display resolutions increase, maintaining low latencies is challenging when rendering in a virtual reality (VR) context. Here, foveated rendering is a promising approach for reducing the number of shaded samples. However, reduced visual acuity is not the eye's only relevant property: the eye is an optical system that filters radiance through its lenses, creating depth-of-field (DoF) effects when accommodating to objects at varying distances. The central idea of this paper is to exploit these effects as a filtering method to conceal rendering artifacts. To showcase the potential of such filters, we present a foveated rendering system tightly integrated with a gaze-contingent DoF filter. Besides presenting benchmarks of the DoF and rendering pipeline, we carried out a perceptual study showing that rendering quality is rated almost on par with full rendering when using DoF in our foveated mode, while the number of shaded samples is reduced by more than 69%.
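The sketch below illustrates the two ingredients combined in such a system: an eccentricity-based shading-rate falloff and a thin-lens circle-of-confusion radius for the gaze-contingent DoF filter. The falloff constants are illustrative; the thin-lens formula itself is standard, and neither snippet is the paper's implementation.

```python
def shading_rate(eccentricity_deg, fovea_deg=5.0, slope=0.05):
    """Fraction of full shading samples at a given retinal eccentricity (deg)."""
    if eccentricity_deg <= fovea_deg:
        return 1.0  # full quality inside the foveal region
    return max(1.0 / (1.0 + slope * (eccentricity_deg - fovea_deg) ** 2), 0.05)

def coc_radius(depth_m, focus_m, focal_len=0.05, aperture=0.025):
    """Thin-lens circle of confusion (meters on the sensor) for a point at
    depth_m when the lens is accommodated to focus_m."""
    return abs(aperture * focal_len * (depth_m - focus_m)
               / (depth_m * (focus_m - focal_len)))

# Points far from fixation receive few shading samples, but their large blur
# radius lets the DoF filter conceal the resulting undersampling artifacts.
print(shading_rate(30.0), coc_radius(5.0, 1.0))
```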
We present a study into the perception of display brightness as related to the physical size and distance of the screen from the observer. Brightness perception is a complex topic influenced by a number of lower- and higher-order factors, with empirical evidence from the cinema industry suggesting that display size may play a significant role. To test this hypothesis, we conducted a series of user studies exploring brightness perception for a range of displays and distances from the observer that span representative use scenarios. Our results suggest that retinal size alone is not sufficient to explain the range of discovered brightness variations, but is sufficient in combination with the physical distance from the observer. The resulting model can be used as a step towards perceptually correcting image brightness based on target display parameters, which can be leveraged for energy management and the preservation of artistic intent. A pilot study suggests that adaptation luminance is an additional factor in the magnitude of the effect.
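As a hedged sketch of the kind of model described (the paper's form and parameters are not reproduced), the snippet below fits perceived-brightness matches from retinal size and physical viewing distance; the log-linear form and the fitting approach are assumptions for illustration.

```python
import numpy as np

def retinal_size_deg(physical_size_m, distance_m):
    """Visual angle (degrees) subtended by a display of given physical size."""
    return 2.0 * np.degrees(np.arctan(physical_size_m / (2.0 * distance_m)))

def fit_brightness_model(sizes_m, distances_m, matched_luminance):
    """Least-squares fit of an assumed form:
    log(L) ~ a + b * log(retinal size) + c * log(distance)."""
    sizes = np.asarray(sizes_m, dtype=float)
    dists = np.asarray(distances_m, dtype=float)
    X = np.column_stack([
        np.ones(len(dists)),
        np.log(retinal_size_deg(sizes, dists)),
        np.log(dists),
    ])
    coeffs, *_ = np.linalg.lstsq(X, np.log(matched_luminance), rcond=None)
    return coeffs  # (a, b, c)
```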