See my full website for more details, publications, etc.
Constructive nature of scene perception
My research is aimed at understanding how the human mind constructs a visual reality beyond the limited sensory input. For example, the human visual system can sample a single snapshot of a scene at a time, and can only keep track of a handful of objects in the environment. Yet people have a remarkable ability to navigate through an unfamiliar city, comprehend fast movie trailers, and find a friend in a crowd. I am interested in how the mind overcomes limits in visual processing, and how the human visual system has developed mechanisms to perceive a coherent visual world beyond the fragmented input it receives.
- How does the mind overcome limits in visual input, and construct a coherent and continuous representation of a scene over multiple views?
- What are the neural mechanisms that support the construction of coherent scene representations, and how do different brain areas play distinct and complementary roles?
- What are the cues that allow scene integration? How do thinking and memory affect the constructive integration of scenes?
- What are the functional properties of scenes, and how do they influence our navigation and interactions with objects?
- What kinds of tasks share similar processing mechanisms, and how are limited processing resources allocated efficiently among them?
To study these questions, my lab uses behavioral psychophysics and brain imaging (fMRI).
050.204 Visual Cognition (Fall 2011)
050.828 Research Seminar in Cognitive Neuroscience of Vision (Fall 2011)
2008 Ph.D., Cognitive Psychology, Yale University
2008-2011 Post-doctoral associate, Brain and Cognitive Sciences, MIT