Modern computers perform calculations at speeds that dramatically exceed human performance, and they excel at highly constrained tasks, such as strategizing in chess or solving mathematical equations. But even the most advanced AI systems cannot yet match the flexible cognitive abilities of the human brain. The goal of my work is to understand how fundamental cognitive functions, such as natural perception, semantic understanding, and commonsense reasoning, are implemented in neural computation. Specifically, my research seeks to reverse-engineer the representations and algorithms of human cognition using methods and theories from neuroscience, cognitive psychology, and computer science (e.g., fMRI, neural network models, computer vision, NLP, statistical modeling). This work leverages the overlapping strengths of neuroscience and AI to address problems in vision, memory, semantic understanding, and navigation. Some of the questions addressed by this work include:
- How do we make sense of natural visual scenes?
- How are scenes composed from objects and surfaces?
- How are the statistical properties of natural scenes used in neural computation?
- How do we plan navigational behaviors?
- How does high-level vision interact with semantic memory?
- How can we use statistical modeling to understand the information encoded in high-dimensional neural signals?
Before joining the Cognitive Science Department at Johns Hopkins, I worked with Russell Epstein as a postdoctoral fellow in the Department of Psychology at the University of Pennsylvania. I completed my PhD in Neuroscience at the University of Pennsylvania, where I was advised by Murray Grossman.