Flat panels are by far the most common type of television screen. There are reasons, however, to think that curved screens create a greater sense of immersion, reduce distracting reflections, and minimize some perceptual distortions that are commonplace with large televisions. To examine these possibilities, we calculated how screen curvature affects the field of view and the probability of seeing reflections of ambient lights. We find that screen curvature has a small beneficial effect on field of view and a large beneficial effect on the probability of seeing reflections. We also collected behavioral data to characterize perceptual distortions in various viewing configurations. We find that curved screens can in fact reduce the problematic perceptual distortions associated with large screens, but that the benefit depends on the geometry of the projection onto the screen.
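The field-of-view comparison above can be sketched geometrically. This is a minimal illustration, assuming the viewer sits on the screen's horizontal midline and, for the curved case, at the screen's centre of curvature; the screen width and viewing distance used below are illustrative values, not taken from the study:

```python
import math

def flat_fov_deg(width_m, distance_m):
    # Horizontal field of view of a flat screen of the given width,
    # viewed head-on from the given distance.
    return math.degrees(2 * math.atan(width_m / (2 * distance_m)))

def curved_fov_deg(arc_width_m, distance_m):
    # Viewer at the centre of curvature (radius of curvature equals
    # viewing distance), so each metre of arc subtends 1/distance radians.
    return math.degrees(arc_width_m / distance_m)

# Example: a 1.2 m wide screen viewed from 2 m.
print(flat_fov_deg(1.2, 2.0))    # ~33.4 degrees
print(curved_fov_deg(1.2, 2.0))  # ~34.4 degrees
```

Because atan(x) < x, the curved screen always subtends a slightly larger angle than a flat screen of the same width at the same distance, which is consistent with the small field-of-view benefit the abstract reports.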
Foveated rendering is a performance optimization based on the well-known degradation of peripheral visual acuity. It reduces computational costs by showing a high-quality image in the user's central (foveal) vision and a lower-quality image in the periphery. Foveated rendering is a promising optimization for Virtual Reality (VR) graphics, and generally requires accurate and low-latency eye tracking to ensure correctness even when a user makes large, fast eye movements such as saccades. However, due to the phenomenon of saccadic omission, it is possible that these requirements may be relaxed. In this paper, we explore the effect of latency on foveated rendering in VR applications. We evaluated the detectability of visual artifacts for three techniques capable of generating foveated images and for three different radii of the high-quality foveal region. Our results show that larger foveal regions allow for more aggressive foveation, and that this effect is more pronounced for temporally stable foveation techniques. Added eye tracking latency of 80-150 ms causes a significant reduction in the acceptable amount of foveation, but a similar decrease was not found for shorter eye tracking latencies of 20-40 ms, suggesting that a total system latency of 50-70 ms could be tolerated.
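The foveal/peripheral split described above can be sketched as a per-pixel quality function of angular distance from the gaze point. This is a minimal sketch assuming a linear quality falloff outside the foveal region; the function, radii, and blend width are hypothetical illustrations, not the specific techniques evaluated in the paper:

```python
import math

def quality_level(pixel_deg, gaze_deg, foveal_radius_deg, blend_deg=5.0):
    """Return a rendering quality factor in [0.25, 1.0] for a pixel.

    pixel_deg, gaze_deg: (x, y) positions in visual degrees.
    foveal_radius_deg:   radius of the full-quality foveal region.
    blend_deg:           width of the transition band outside it.
    """
    # Angular eccentricity: distance from the current gaze point.
    ecc = math.hypot(pixel_deg[0] - gaze_deg[0], pixel_deg[1] - gaze_deg[1])
    if ecc <= foveal_radius_deg:
        return 1.0  # full quality inside the foveal region
    # Linear falloff across the blend band, floored at a coarse
    # peripheral quality level.
    t = min((ecc - foveal_radius_deg) / blend_deg, 1.0)
    return max(0.25, 1.0 - 0.75 * t)
```

Eye tracking latency matters in this framing because `gaze_deg` lags the true gaze position: after a saccade, pixels the user now fixates may briefly fall outside the stale foveal region and be rendered at reduced quality.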
Unlike their human counterparts, artificial agents such as robots and game characters may be deployed with a large variety of face and body configurations. Some have articulated bodies but lack facial features, and others may be talking heads ending at the neck. Generally, they have many fewer degrees of freedom than humans through which they must express themselves, and there will inevitably be a filtering effect when mapping human motion onto the agent. In this paper, we investigate filtering effects on three types of embodiments: a) an agent with a body but no facial features, b) an agent with a head only, and c) an agent with a body and a face. We performed a full performance capture of a mime actor enacting short interactions, varying the non-verbal expression along five dimensions (e.g. level of frustration and level of certainty), for each of the three embodiments. We then performed a crowdsourced evaluation experiment comparing the video of the actor to the video of an animated robot for the different embodiments and dimensions. Our findings suggest that the face is especially important for pinpointing emotional reactions, but is also the most susceptible to filtering effects. The body motion, on the other hand, elicited more diverse interpretations, but those interpretations tended to be preserved after mapping, and body motion thus proved more resilient to filtering.
Film directors are masters at controlling what we look at when we watch a film. However, there have been few quantitative studies of how gaze responds to cinematographic conventions thought to influence attention. We have collected a data set designed to help investigate eye movements in response to higher-level features such as faces, dialogue, camera movements, image composition, and edits. The data set, which will be released to the community, includes gaze information for 21 viewers watching 15 clips from live-action 2D films, hand-annotated for these high-level features. This work has implications for media studies, display technology, immersive reality, and human cognition.