Image quality is an important practical challenge that is often overlooked in the design of machine vision systems. Commonly, machine vision systems are trained and tested on high-quality image datasets, yet in practical applications the input images cannot be assumed to be of high quality. Modern deep neural networks (DNNs) have been shown to perform poorly on images affected by blur or noise distortions. In this work we investigate whether human subjects also perform poorly on distorted stimuli, and provide a direct comparison with the performance of deep neural networks. Specifically, we study the effect of Gaussian blur and additive Gaussian noise on human and DNN classification performance. We perform two experiments: one crowd-sourced experiment with unlimited stimulus display time, and one lab experiment with 100 ms display time. In both cases we found that humans outperform neural networks on distorted stimuli, even when the networks are retrained with distorted data.
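The two distortions named above are standard image manipulations. As an illustration only (the paper does not publish its code, and the parameter values below are invented), they might be applied to a float image in [0, 1] like this, using a separable Gaussian convolution and clipped additive noise:

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    # Discrete 1-D Gaussian kernel, truncated near 3*sigma and normalized to sum to 1.
    if radius is None:
        radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def gaussian_blur(image, sigma):
    # Separable blur: convolve down the rows, then across the columns,
    # independently for each color channel of an (H, W, C) array.
    k = gaussian_kernel(sigma)
    blurred = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 0, image)
    blurred = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 1, blurred)
    return blurred

def add_gaussian_noise(image, sigma, seed=None):
    # Zero-mean additive Gaussian noise, clipped back to the valid [0, 1] range.
    rng = np.random.default_rng(seed)
    return np.clip(image + rng.normal(0.0, sigma, image.shape), 0.0, 1.0)

# Example: distort a synthetic 16x16 RGB image with a single bright pixel.
img = np.zeros((16, 16, 3))
img[8, 8, :] = 1.0
blurred = gaussian_blur(img, sigma=1.5)
noisy = add_gaussian_noise(img, sigma=0.1, seed=0)
```

In retraining experiments of this kind, such distortions are typically applied on the fly as data augmentation before each training batch.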
Vision in real environments stabilizes balance compared to an eyes-closed condition. For virtual reality to be safe and fully effective in applications such as physical rehabilitation, vision in virtual reality should stabilize balance as much as vision in the real world. Older virtual reality technology was previously found to stabilize balance, but by less than half as much as real-world vision. Recent advances in display technology might allow vision in virtual reality to be as stabilizing as vision in the real world. This study evaluated whether viewing a virtual environment through the HTC Vive, a recent consumer-grade head-mounted display, stabilizes balance, and whether the visual stabilization is similar to that provided by real-world vision. Participants viewed the real laboratory or a virtual replica of the laboratory and attempted to maintain an unstable stance with eyes open or closed while standing at one of two viewing distances. Vision was significantly stabilizing in all conditions, but the virtual environment provided less visual stabilization than did the real environment. Regardless of the environment, near viewing led to greater visual stabilization than did far viewing. The smaller stabilizing influence of viewing a virtual rather than a real environment might lead to a greater risk of falls in virtual reality and smaller gains in physical rehabilitation using virtual reality.
Learning the methods of psychophysics is an essential part of training for perceptual experimentation, and hands-on experience is vital, but gaining this experience is difficult because good tools for learning are not available. The FechDeck is an ordinary deck of playing cards that has been modified to support learning the methods of psychophysics. Card backs are printed with noise patterns that span a range of densities. Faces are augmented with line segments arranged in 'L' patterns. Jokers are printed with ruled faces and with backs that serve as standards. Instructions provided with the FechDeck allow users to perform threshold experiments using Fechner's methods of adjustment, limits, and constant stimuli; scaling experiments using Thurstone's ranking, pair-comparison, and successive-categories methods; and magnitude-estimation experiments using Stevens's method. Spreadsheets provided with the deck support easy data entry and meaningful data analysis. An online repository supporting the FechDeck has been established to facilitate dissemination and to encourage open-source development of the deck.
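As an illustration of the kind of data reduction the accompanying spreadsheets support, a method-of-constant-stimuli dataset might be summarized like this (a minimal sketch; the data and function name here are invented, not taken from the FechDeck materials, and a simple linear interpolation stands in for a full psychometric-function fit):

```python
import numpy as np

def constant_stimuli_threshold(levels, n_yes, n_trials):
    # Proportion of "yes" (e.g., "denser" or "longer") responses at each level.
    p = np.asarray(n_yes) / np.asarray(n_trials)
    # Threshold estimate: the stimulus level at which p crosses 0.5,
    # found by linear interpolation (assumes p increases with level).
    return float(np.interp(0.5, p, levels))

# Hypothetical data: 5 stimulus levels, 20 trials each.
levels = [1, 2, 3, 4, 5]
n_yes = [2, 5, 10, 16, 19]
threshold = constant_stimuli_threshold(levels, n_yes, [20] * 5)
print(threshold)  # -> 3.0
```

A fuller analysis would fit a sigmoid (e.g., cumulative Gaussian) to the response proportions, but the interpolation above captures the core idea of reading a threshold off constant-stimuli data.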