Research Interests: Universal grammar in Optimality Theory; integration of connectionist ('neural') and symbolic computation: computational, linguistic, and philosophical issues
My research (see Research tab) focuses on integrating symbolic and neural network computation for modeling reasoning and, especially, grammar in the human mind/brain. The work is formal and computational, with emerging applications to neuroscience and applied natural language processing. My research has primarily addressed issues of representation and processing rather than learning. Principal contributions (see Publications tab) are to linguistic theory, the theory of vectorial neural network computation, and the philosophical foundations of cognitive science.
During fall semesters I am on leave from Johns Hopkins, working at Microsoft Research in Redmond, Washington (for a non-technical synopsis of some of my recent work there, see this link: Mind/Brain Networks). Prior to joining the faculty of the Cognitive Science Department at Johns Hopkins, I was a professor in the Computer Science Department and the Institute of Cognitive Science at the University of Colorado Boulder. Before that, I was a postdoc at the Institute for Cognitive Science at the University of California, San Diego, where I was a founding member of the Parallel Distributed Processing Research Group and worked with Dave Rumelhart, James McClelland, and Geoff Hinton. (I also contributed to the User-Centered System Design group led by Don Norman.) My degrees are an A.B. in Physics from Harvard and, from Indiana University Bloomington, an M.S. in Physics and a Ph.D. in Mathematical Physics.
Goal
Unification of the sciences of mind & brain through the integration of
- compositional, structured, symbolic computation, at the core of many successful classical theories of the mind, with
- a branch of continuous mathematics at the core of the theory of neural networks, crucial for
  - computational models of the brain
  - emergentist models of the mind
  - contemporary machine learning and Artificial Intelligence
Current
The theory, and application to language, of Gradient Symbolic Computation, a new cognitive architecture in which a single computational system can simultaneously be described formally at two levels (illustrated in the sketches after this list):
- a higher 'abstract mental' level, where
  - data consist of symbols that have partial degrees of presence — gradient activity levels — which blend together to form Gradient Symbol Structures (such as gradient trees)
  - processing is algebraic operations on vectors and tensors
- a lower 'abstract neural' level, where
  - data consist of distributed activation vectors over many model neurons, which superimpose to implement Gradient Symbol Structures…
  - processing is probabilistic spreading of activation (governed by stochastic differential equations) through networks with numerically weighted interconnections
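To make the two-level picture concrete, here is a minimal sketch (in Python with NumPy) of a Tensor Product Representation, the algebraic device that lets gradient symbol blends at the higher level be realized as superimposed activation vectors at the lower level. The filler vectors, the role vectors, and the activity levels 0.8/0.3 are invented for illustration only, not taken from any actual GSC model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Lower level: each symbol ("filler") is a distributed activation
# vector over model neurons; each structural position ("role") is,
# for simplicity here, an orthonormal vector, so unbinding is exact.
fillers = {s: rng.standard_normal(4) for s in ("A", "B")}
roles = {"left": np.array([1.0, 0.0]), "right": np.array([0.0, 1.0])}

def bind(filler, role):
    """Bind a filler to a role via the outer (tensor) product."""
    return np.outer(filler, role)

# Higher level: a Gradient Symbol Structure blends partially present
# symbols. Here position 'left' holds the blend 0.8*A + 0.3*B, while
# position 'right' holds B at full activity; the whole structure is
# the superposition (sum) of the bindings.
structure = (0.8 * bind(fillers["A"], roles["left"])
             + 0.3 * bind(fillers["B"], roles["left"])
             + 1.0 * bind(fillers["B"], roles["right"]))

def unbind(structure, role):
    """Recover the (gradient) filler in a role, algebraically."""
    return structure @ role

# The gradient blend in the 'left' position is recovered exactly.
assert np.allclose(unbind(structure, roles["left"]),
                   0.8 * fillers["A"] + 0.3 * fillers["B"])
```

Note that `structure` is a single array playing both roles: at the higher level it is operated on algebraically as a gradient symbol structure, while at the lower level the very same numbers are an activation pattern over 4 × 2 model neurons.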
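For the lower level's processing, here is a sketch of probabilistic spreading activation: a generic Langevin-type stochastic differential equation, da = (W a - a) dt + sigma dB, integrated by the Euler-Maruyama method. The network size, weights, decay term, and noise level are illustrative assumptions; the actual GSC dynamics (stochastic optimization of a harmony function) are richer than this stand-in.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 6                                  # number of model neurons (toy size)
W = 0.1 * rng.standard_normal((n, n))  # numerically weighted interconnections
W = (W + W.T) / 2                      # symmetric weights: a harmony function exists

a = rng.standard_normal(n)             # initial distributed activation vector
dt, sigma = 0.01, 0.05                 # step size and noise level (illustrative)

for _ in range(10_000):                # Euler-Maruyama integration of the SDE
    drift = W @ a - a                  # spreading activation with decay
    a += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)

# The drift is the gradient of the quadratic harmony
#   H(a) = a @ W @ a / 2 - a @ a / 2,
# so the trajectory is noisy harmony ascent: activation settles near a
# harmony maximum and fluctuates there, with sigma controlling how
# "probabilistic" the spreading is.
```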
Teaching
AS.050.326/626 Foundations of Cognitive Science
AS.050.372/672 Foundations of Neural Network Theory
AS.050.829 Research Seminar on Formal Theory in Cognitive Science
AS.050.860 Professional Seminar in Cognitive Science
Advising
Below is a list of my primary and secondary PhD student advisees since 1995. To view a complete list of my department's PhD alumni, please visit our Alumni Placement webpage.
Selected Talks
Vertical integration of neural and symbolic computation: Theory and application. January 5, 2018, Salt Lake City; plenary lecture at the inaugural meeting of the Society for Computation in Linguistics and the symposium Perceptrons & Syntactic Structures at 60.
[Photo: discussion session featuring an all-JHU-CogSci cast (left to right): Paul Smolensky (faculty), Matt Goldrick (PhD 2002), Tom McCoy (PhD student), Tal Linzen (faculty), Pyeong Whan Cho (postdoc)]
Grammatical theory with Gradient Symbol Structures. January 12, 2016, Budapest; Research Institute for Linguistics, Hungarian Academy of Sciences.
Four facts about Tensor Product Representations. December 12, 2015, Montreal; NIPS workshop Cognitive Computation: Integrating Neural and Symbolic Approaches.
Gradient Symbols in Grammar. October 26, 2015; Mind, Technology and Society Talk Series, Cognitive and Information Sciences Department, University of California, Merced.
Does the success of deep neural network language processing mean — finally — the end of theoretical linguistics? July 31, 2015, Beijing; invited talk, with Jennifer Culbertson, at CoNLL (Conference on Computational Natural Language Learning; SIGNLL of ACL).