Which facial profile do humans expect after seeing a frontal view? A comparison with a linear face model
Matthaeus Schumacher, Volker Blanz
Article No.: 11
Manipulated versions of three-dimensional faces that have different profiles, but almost the same appearance in frontal views, provide a novel way to investigate if and how humans use class-specific knowledge to infer depth from images of faces....
Single-trial EEG classification of artifacts in videos
Maryam Mustafa, Stefan Guthe, Marcus Magnor
Article No.: 12
In this article, we use an electroencephalograph (EEG) to explore the perception of artifacts that typically appear during rendering and to determine the perceptual quality of a sequence of images. Although there is an emerging interest in using an...
Visual and emotional salience influence eye movements
Yaqing Niu, Rebecca M. Todd, Matthew Kyan, Adam K. Anderson
Article No.: 13
In natural vision both stimulus features and cognitive/affective factors influence an observer's attention. However, the relationship between stimulus-driven (bottom-up) and cognitive/affective (top-down) factors remains controversial: How well...
Minification affects verbal- and action-based distance judgments differently in head-mounted displays
Ruimin Zhang, Anthony Nordman, James Walker, Scott A. Kuhl
Article No.: 14
Numerous studies report that people underestimate egocentric distances in Head-Mounted Display (HMD) virtual environments compared to real environments as measured by direct blind walking. Geometric minification, or rendering graphics with a...
Most methods for synthesizing panoramas assume that the scene is static. A few methods have been proposed for synthesizing stereo or motion panoramas, but there has been little attempt to synthesize panoramas that have both stereo and motion. One...