Two CogSci Students Receive NSF GRFs

PhD students Taylor Martinez and Kathy Garcia have the distinct honor of being selected as 2024 NSF Graduate Research Fellows!

Taylor Martinez

NSF Graduate Research Fellowship in Social Sciences – Linguistics

Project: Are You Hearing What I’m Hearing?: Pragmatic inference and language change in wellness dogwhistles

Advisors: Kyle Rawlins and Barbara Landau

Dogwhistles are terms that send one message to an outgroup while simultaneously sending a second, covert message to an ingroup. Most previous accounts have restricted their analyses to dogwhistles used in a single period of time (i.e., synchronically); as a result, much less is understood about how dogwhistles emerge and change over time. In my research, I will provide an account of dogwhistles that characterizes the forms of language change they undergo, using a case study of wellness language.

Background and Research Interests: In 2019, I graduated from Rutgers University with a B.A. in Linguistics and Spanish. Afterward, I worked as a lab manager in the Rutgers Lab for Developmental Language Studies and the Princeton Baby Lab. Currently, I am a second-year PhD student, and my research interests center broadly on how ideological and spatial perspective are encoded in language.

Kathy Garcia

NSF Graduate Research Fellowship in Psychology (other) – Computational

Project: Large-scale Deep Neural Network Benchmarking in Dynamic Social Vision

Advisor: Leyla Isik

To date, deep learning models trained for computer vision tasks are the best models of human vision. This work has largely focused on neural responses to static images, but the visual world is highly dynamic, and recent work has suggested that in addition to the ventral visual stream specializing in static object recognition, there is a lateral visual stream that processes dynamic, social content. In this work, we investigate the ability of 350+ modern image and video models to predict human neural responses to visual-social content in short video clips. We find that, unlike in prior benchmarks, even the best image-trained models do a poor job of explaining human behavioral judgments and neural responses. In early and mid-level lateral visual regions, video-trained models predict neural responses far better than image-trained models. However, prediction by all models is overall lower in lateral than in ventral visual regions of the brain, particularly in the superior temporal sulcus. Together, these results reveal a key gap in modern deep learning models' ability to match human responses to dynamic visual scenes.

Research Interests: NeuroAI, Deep Learning, Cognitive Neuroscience