Seeing in 3D is a particularly challenging perceptual task, yet it is essential for many visually guided behaviors. The visual system has no direct access to the 3D structure of the environment, so that structure must be inferred from a variety of cues in the retinal images, such as linear perspective, defocus blur, and binocular disparity. We study how these cues are used and combined in our perception of the 3D world, how they influence our visually guided behaviors, and how 3D perception is shaped by our experiences.
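A widely used account of cue combination in the psychophysics literature is reliability-weighted averaging: each cue's estimate is weighted in proportion to its reliability (the inverse of its variance). The sketch below is a minimal Python illustration of that standard model, not a description of our specific methods; the function name and the numbers in the example are hypothetical.

```python
import numpy as np

def combine_cues(estimates, variances):
    """Fuse independent depth estimates from different cues.

    Under the standard maximum-likelihood model, each cue is
    weighted by its reliability (inverse variance), and the
    fused estimate is more reliable than any single cue.
    """
    reliabilities = 1.0 / np.asarray(variances, dtype=float)
    weights = reliabilities / reliabilities.sum()
    fused = float(np.dot(weights, np.asarray(estimates, dtype=float)))
    fused_variance = 1.0 / reliabilities.sum()
    return fused, fused_variance

# Hypothetical numbers: disparity suggests the surface is 50 cm away
# (low noise), perspective suggests 60 cm (higher noise).
depth, var = combine_cues([50.0, 60.0], [1.0, 4.0])
print(f"fused depth = {depth:.1f} cm, variance = {var:.2f}")
# fused depth = 52.0 cm, variance = 0.80
```

Note that the fused estimate (52.0 cm here) lands closer to the more reliable cue, and its variance (0.80) is lower than that of either cue alone.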
The images cast on our retinas are not random. For example, things that are close to each other, like the leaves on a tree or the clouds in the sky, tend to be similar in brightness and hue. Visual systems have evolved to exploit these statistical regularities, leading to perceptual processes and behaviors that are well adapted to our world. But there are many ways a system can be “well adapted,” so many open questions remain. Answering these questions is a major thrust of our research: the answers are essential to visual neuroscience and key to designing new display technologies.
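This kind of regularity can be measured directly, by computing the correlation between pairs of pixel intensities as a function of their separation. The sketch below is one simple way to do this in Python (the function name is our own, for illustration): applied to a natural photograph, the correlation stays high at small separations and falls off gradually with distance, whereas for white noise it vanishes almost immediately.

```python
import numpy as np

def horizontal_correlation(image, max_offset=32):
    """Correlation between pixel intensities as a function of
    horizontal separation in a grayscale image (2D array)."""
    img = np.asarray(image, dtype=float)
    return np.array([
        # Pair each pixel with the pixel d columns to its right,
        # then take the Pearson correlation across all such pairs.
        np.corrcoef(img[:, :-d].ravel(), img[:, d:].ravel())[0, 1]
        for d in range(1, max_offset + 1)
    ])

# Baseline check with white noise: no spatial structure, so the
# correlation is near zero even at a separation of one pixel.
rng = np.random.default_rng(0)
noise = rng.random((256, 256))
print(horizontal_correlation(noise, max_offset=4))  # values near 0
```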
Studying the human visual system can aid the creation of convincing, practical computer graphics and augmented/virtual reality experiences. We seek to understand when realism is required for accurate perception and comfort, and when it's okay to take shortcuts. We also examine how emerging visual displays can provide assistive technology when vision is impaired.