Eye motions constitute an important part of our daily face-to-face interactions. Even subtle details in the eyes' motions give us clues about a person's thoughts and emotions. Believable and natural animation of the eyes is therefore crucial when creating appealing virtual characters. In this paper, we investigate the perceived naturalness of detailed eye motions, more specifically of jitter in the eyeball rotation and pupil diameter, on three virtual characters with differing levels of realism. Participants watched stimuli in which eye rotation and pupil size jitter were varied individually with six scaling factors from 0 to 1 in increments of 0.2, and indicated whether they would increase or decrease the level of jitter to make the animation look more natural. Based on participants' responses, we determine the scaling factors for noise attenuation perceived as most natural for each character when using motion-captured eye motions. We compute the corresponding average jitter amplitudes for the eyeball rotation and pupil size to serve as guidelines for other characters. We find that the amplitudes perceived as most natural depend on the character, with our medium-realism character requiring the largest scaling factors.
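The scaling manipulation described above can be illustrated with a minimal sketch: separate a captured eye signal into a smooth baseline and a high-frequency jitter residual, then scale the residual by a factor in [0, 1]. The moving-average baseline and the function name are assumptions for illustration, not the paper's actual pipeline.

```python
import numpy as np

def scale_jitter(signal, scale, window=9):
    """Attenuate high-frequency jitter in a captured eye signal.

    The smooth baseline is estimated with a moving average (an
    illustrative assumption); the jitter residual is scaled by
    `scale` in [0, 1] and added back onto the baseline.
    """
    kernel = np.ones(window) / window
    baseline = np.convolve(signal, kernel, mode="same")
    return baseline + scale * (signal - baseline)

# the six scaling factors used in the stimuli: 0, 0.2, ..., 1.0
factors = np.arange(0.0, 1.01, 0.2)
```

A scale of 1 reproduces the captured signal unchanged, while a scale of 0 leaves only the smoothed baseline; intermediate factors attenuate the jitter amplitude proportionally.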
In computer graphics, illuminating a scene is a complex task, typically consisting of cycles of adjusting and rendering the scene to see the effects. We propose a technique for visualization of light as a tensor field via extracting its properties (i.e. intensity, direction, diffuseness) from radiance measurements and showing these properties as a grid of shapes over a volume of a scene. Presented in the viewport, our visualizations give an understanding of the illumination conditions in the measured volume for both the local values and the global variations of light properties. Additionally, they allow quick inferences of the resulting visual appearance of (objects in) scenes without the need to render them. In our evaluation, observers performed at least as well using visualizations as using renderings when they were comparing illumination between parts of a scene and inferring the final appearance of objects in the measured volume. Therefore, the proposed visualizations are expected to help lighting professionals by providing perceptually relevant information about the light in a scene.
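The property extraction named above can be sketched with textbook light-field moments: intensity as the mean radiance, direction as the normalized first moment (light vector), and a simple diffuseness estimate derived from the light vector's magnitude. This is an illustrative approximation under assumed uniform directional sampling, not the paper's tensor-field formulation.

```python
import numpy as np

def light_properties(directions, radiances):
    """Estimate local light properties from radiance samples.

    directions: (N, 3) unit vectors of sample directions;
    radiances:  (N,) radiance values along those directions.
    Returns intensity (mean radiance), dominant light direction,
    and a diffuseness estimate in [0, 1] (illustrative formula:
    1 minus the normalized magnitude of the light vector).
    """
    weights = radiances / radiances.sum()
    light_vec = (weights[:, None] * directions).sum(axis=0)
    intensity = radiances.mean()
    strength = np.linalg.norm(light_vec)
    direction = light_vec / strength if strength > 0 else light_vec
    diffuseness = 1.0 - strength  # 1 for uniform light, -> 0 for collimated
    return intensity, direction, diffuseness
```

Evaluating such properties on a regular grid of probe positions yields one shape glyph per grid cell, which matches the "grid of shapes over a volume" presentation described in the abstract.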
In this paper, we evaluate various image-space modulation techniques that aim to unobtrusively guide viewers' attention. While previous evaluations mainly target desktop settings, we examine their applicability to ultra-wide field of view immersive environments, featuring technical characteristics expected of future-generation head-mounted displays. A custom-built, high-resolution immersive dome environment with high-precision eye tracking is used in our experiments. We investigate the gaze guidance success rate and unobtrusiveness of five different techniques. Our results show promising guiding performance for four of the tested methods. With regard to unobtrusiveness, we find that, while no method remains completely unnoticed, many participants do not report any distractions. The evaluated methods show promise for guiding users' attention in wide field of view virtual environment applications as well, e.g. virtually guided tours or field operation training.
In recent years, a variety of methods have been introduced to exploit the decrease in visual acuity of peripheral vision, known as foveated rendering. As increasingly complex shading is requested and display resolutions grow, maintaining low latency is challenging when rendering in a virtual reality (VR) context. Here, foveated rendering is a promising approach for reducing the number of shaded samples. However, besides its reduced peripheral acuity, the eye is an optical system that filters radiance through a lens. The lens creates depth-of-field (DoF) effects when the eye accommodates to objects at varying distances. The central idea of this paper is to exploit these effects as a filtering method to conceal rendering artifacts. To showcase the potential of such filters, we present a foveated rendering system tightly integrated with a gaze-contingent DoF filter. Besides presenting benchmarks of the DoF and rendering pipeline, we carried out a perceptual study showing that rendering quality is rated almost on par with full rendering when using DoF in our foveated mode, while the number of shaded samples is reduced by more than 69%.
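The gaze-contingent DoF effect described above rests on standard thin-lens optics: the blur radius at a given depth grows with the distance from the accommodated (focus) plane. A minimal sketch of the classic circle-of-confusion formula follows; the function name and parameterization are assumptions for illustration, not the paper's implementation.

```python
def circle_of_confusion(obj_dist, focus_dist, focal_len, aperture):
    """Thin-lens circle-of-confusion diameter (all arguments in the
    same length unit, e.g. meters).

    obj_dist:   distance to the object being imaged
    focus_dist: distance the lens is accommodated (focused) to
    focal_len:  focal length of the lens
    aperture:   aperture (pupil) diameter
    """
    return (aperture
            * abs(obj_dist - focus_dist) / obj_dist
            * focal_len / (focus_dist - focal_len))
```

Objects at the focus distance map to a zero-diameter circle (sharp), while nearer and farther objects receive progressively larger blur kernels; in a gaze-contingent system, `focus_dist` would be driven by the depth under the tracked gaze point, so that blur both mimics accommodation and masks reduced peripheral sample density.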