Paul Smolensky

Krieger-Eisenhower Professor

Krieger 241B
Away during Spring 2017
410-516-5331
smolensky@jhu.edu
Curriculum Vitae
Personal Website
Google Scholar Profile

Biography
Research
Publications
Books
PhD Students
Presentations

My research (see Research tab) focuses on integrating symbolic and neural network computation for modeling reasoning and, especially, grammar in the human mind/brain. The work is formal and computational, with emerging applications to neuroscience and applied natural language processing. My research has primarily addressed issues of representation and processing rather than learning. Principal contributions (see Publications tab) are to linguistic theory, the theory of vectorial neural network computation, and the philosophical foundations of cognitive science.

Prior to joining the faculty of the Cognitive Science Department at Johns Hopkins, I was a professor in the Computer Science Department and Institute of Cognitive Science at the University of Colorado Boulder. Before that, I was a postdoc at the Center for Cognitive Science at the University of California, San Diego, where I was a founding member of the Parallel Distributed Processing Research Group and worked with Dave Rumelhart, James McClelland, and Geoff Hinton. (I also contributed to the User-Centered System Design group led by Don Norman.) My degrees are an A.B. in Physics from Harvard and, from Indiana University, Bloomington, an M.S. in Physics and a Ph.D. in Mathematical Physics. (CV)

Goal

Unification of the sciences of mind & brain through integration of

  • compositional, structured, symbolic computation
    • at the core of many successful classical theories of the mind
      • in particular, the theory of language
    • a branch of discrete mathematics
  • dynamic, distributed, vectorial connectionist computation
    • at the core of the theory of neural networks, crucial for
      • computational models of the brain
      • emergentist models of the mind
      • contemporary machine learning and Artificial Intelligence
    • a branch of continuous mathematics

Current

The theory, and application to language, of Gradient Symbolic Computation, a new cognitive architecture in which a single computational system can simultaneously be described formally at two levels:

  • a higher ‘abstract mental’ level, where
    • data
      • consist of symbols that have partial degrees of presence — gradient activity levels
      • which blend together to form Gradient Symbol Structures (such as gradient trees)
    • processing
      • is algebraic operations on vectors and tensors
  • a lower ‘abstract neural’ level, where
    • data
      • consist of distributed activation vectors over many model neurons
      • which superimpose to implement Gradient Symbol Structures…
    • processing
      • is probabilistic spreading of activation (governed by stochastic differential equations)
      • through networks with numerically weighted interconnections
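The two-level description above can be sketched concretely with a tensor product representation. The following is a minimal illustration under stated assumptions, not code from any publication: the symbols A and B, the positions r0 and r1, the 0.7/0.3 blend, and the function names are all invented for the example, and one-hot filler and role vectors are used so that unbinding is exact.

```python
import numpy as np

# Fillers (symbols) and roles (structural positions) as one-hot vectors.
fillers = {"A": np.array([1.0, 0.0]), "B": np.array([0.0, 1.0])}
roles = {"r0": np.array([1.0, 0.0]), "r1": np.array([0.0, 1.0])}

def bind(filler, role):
    """Bind a filler to a role with the tensor (outer) product."""
    return np.outer(filler, role)

# Higher level: a Gradient Symbol Structure. Position r0 holds a blend,
# 0.7 A + 0.3 B (partial degrees of presence); r1 holds B at full activity.
structure = (0.7 * bind(fillers["A"], roles["r0"])
             + 0.3 * bind(fillers["B"], roles["r0"])
             + 1.0 * bind(fillers["B"], roles["r1"]))

# Lower level: `structure` is simply one distributed activation pattern,
# the superposition of all the weighted bindings above.

def unbind(tpr, role):
    """Recover the (gradient) filler occupying a role."""
    return tpr @ role

print(unbind(structure, roles["r0"]))  # → [0.7 0.3]: gradient activity of A, B
```

With orthonormal roles, unbinding returns each symbol's activity level exactly; with merely linearly independent roles, one would unbind with the dual (pseudoinverse) role vectors instead.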

The 20 most recent publications are displayed below. View the Google Scholar Profile for the complete publications list.


C Manning, P Smolensky
Integrating Symbolic and Neural Computation
Annual Review of Linguistics 4 (1), 2017

MT Putnam, G Legendre, P Smolensky
How constrained is language mixing in bi- and uni-modal production?
Linguistic Approaches to Bilingualism 6 (6), 812-816, 2017

P Smolensky, M Lee, X He, W Yih, J Gao, L Deng
Basic reasoning with tensor product representations
arXiv preprint arXiv:1601.02745, 2016

M Lee, X He, W Yih, J Gao, L Deng, P Smolensky
Reasoning in vector space: An exploratory study of question answering
arXiv preprint arXiv:1511.06426, 2015

P Smolensky, M Goldrick, D Mathis
Optimization and quantization in gradient symbol systems: a framework for integrating the continuous and the discrete in cognition
Cognitive Science 38 (6), 1102-1138, 2014

J Culbertson, P Smolensky, C Wilson
Cognitive biases, linguistic universals, and constraint‐based grammar learning
Topics in cognitive science 5 (3), 392-424, 2013

J Culbertson, P Smolensky
A Bayesian Model of Biases in Artificial Language Learning: The Case of a Word‐Order Universal
Cognitive science 36 (8), 1468-1498, 2012

P Smolensky
Symbolic functions from neural computation
Philosophical Transactions of the Royal Society of London A: Mathematical ..., 2012

P Smolensky
Subsymbolic computation theory for the human intuitive processor
Conference on Computability in Europe, 675-685, 2012

J Culbertson, P Smolensky, G Legendre
Learning biases predict a word order universal
Cognition 122 (3), 306-329, 2012

J Culbertson, P Smolensky, G Legendre
Statistical learning constrained by syntactic biases in an artificial language learning task
Proceedings of the 36th Annual Boston University Conference on Language ..., 2012

G Legendre, P Smolensky
On the asymmetrical difficulty of acquiring person reference in French: production versus comprehension
Journal of Logic, Language and Information 21 (1), 7-30, 2012

D Ramadoss, P Smolensky
Tone perception cues: Pitch targets, trajectories, or both?
The Journal of the Acoustical Society of America 129 (4), 2420-2420, 2011

J Culbertson, P Smolensky, G Legendre
Learning biases and constraints on syntactic typology: An artificial language learning approach
Talk presented at the 85th Annual Meeting of the Linguistic Society of ..., 2011

I Berent, T Lennertz, P Smolensky
Markedness and misperception: It’s a two-way street
Handbook of the Syllable, 373-394, 2011

L Hogeweg, G Legendre, P Smolensky
Kinship terminology: polysemy or categorization?
Behavioral and Brain Sciences 33 (05), 386-387, 2010

J Culbertson, P Smolensky
Disharmony in the nominal domain: An artificial language learning approach [slides LAGB]

J Culbertson, P Smolensky, G Legendre
Some evidence for cognitive universals in language and beyond

W Bechtel, M Behrmann, N Chater, RJ Glushko, RL Goldstone, ...
The Rumelhart Prize at 10
Cognitive science 34 (5), 713-715, 2010

Below is a list of my primary and secondary PhD student advisees since 1995. To view a complete list of my department's PhD alumni, please visit our Alumni Placement webpage.

edited 8/2016

Primary Advisor

Name

Current Position

Dissertation. Graduating Year.

Deepti Ramadoss
(co-advisor:
L. Burzio)

Freelance Writer

The phonology and phonetics of tone perception. 2011.

Jennifer Culbertson
(co-advisor:
G. Legendre)

Chancellor's Fellow
Dept of Linguistics & English Language
Univ of Edinburgh

Learning biases, regularization, and the emergence of typological universals in syntax. 2010.

Rebecca Morley

Asst Prof
Dept of Linguistics
Ohio State Univ

Generalization, Lexical Statistics, and Typologically Rare Systems. 2008.

Sara Finley

Asst Prof
Dept of Psychology
Pacific Lutheran Univ

Formal and Cognitive Restrictions on Vowel Harmony. 2008.

Gaja Jarosz

Assoc Prof
Dept of Linguistics
Univ of Massachusetts Amherst

Rich Lexicons and Restrictive Grammars – Maximum Likelihood Learning in Optimality Theory. 2006.

Adam Buchwald
(co-advisor:
B. Rapp)

Assoc Prof
Dept of Communicative
Sciences & Disorders
NYU

Sound structure representation, repair and well-formedness: Grammar in spoken language production. 2005.

Lisa Davidson

Assoc Prof
Dept of Linguistics
NYU

The atoms of phonological representation: Gestures, coordination and perceptual features in consonant cluster phonotactics. 2003.

John Hale

Assoc Prof
Dept of Linguistics
Cornell

Grammar, uncertainty, and sentence processing. 2003.

Bruce Tesar

Professor
Dept of Linguistics
Rutgers

Computational Optimality Theory. 1995. Computer Science, Univ of Colorado

Secondary Advisor

Name

Current Position

Dissertation. Graduating Year.

Tamara Nicol Medina
(primary advisor:
B. Landau)

Asst Prof - Teaching
Dept of Psychology
Univ of Delaware

Learning which verbs allow object omission: Verb semantic selectivity and the implicit object construction. 2007.

Matthew Goldrick
(primary advisor:
B. Rapp)

Assoc Prof
Dept of Linguistics
Northwestern

Patterns in sound, patterns in mind: Phonological regularities in speech production. 2002.

Colin Wilson
(primary advisor:
L. Burzio)

Assoc Prof
Dept of Cognitive Science
JHU

Targeted Constraints: An Approach to Positional Neutralization in Optimality Theory. 2000.

Adamantios Gafos
(primary advisor:
L. Burzio)

Professor
Dept of Linguistics
Potsdam Univ

The Articulatory Basis of Locality in Phonology. 1996.

Grammatical theory with Gradient Symbol Structures
January 12, 2016, Budapest; Research Institute for Linguistics, Hungarian Academy of Sciences

Four facts about Tensor Product Representations
December 12, 2015, Montreal; NIPS workshop Cognitive Computation: Integrating Neural and Symbolic Approaches

Gradient Symbols in Grammar
October 26, 2015; Mind, Technology and Society Talk Series, Cognitive and Information Sciences Department, University of California, Merced

Towards Understandable Neural Networks for High Level AI Tasks (short course on Tensor Product Representations)
Fall, 2015; Microsoft Research Talks

Does the success of deep neural network language processing mean — finally — the end of theoretical linguistics?
July 31, 2015, Beijing; Invited talk, with Jennifer Culbertson. CoNLL (Conference on Computational Natural Language Learning; SIGNLL of ACL)

Symbolic roles in vectorial computation
July 14, 2014, Redmond WA; Microsoft Research Faculty Summit panel, Deep Learning for Text Processing