Learnability in Optimality Theory

Highlighting the close relationship between linguistic explanation and learnability, Bruce Tesar and Paul Smolensky examine the implications of Optimality Theory (OT) for language learnability. They show how the core principles of OT lead to the learning principle of Constraint Demotion, the basis for a family of algorithms that infer constraint rankings from linguistic forms.
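The core mechanism can be sketched compactly. Below is a minimal, illustrative Python rendering of Recursive Constraint Demotion, one member of this algorithm family; the encoding of winner-loser pairs as dicts of "W"/"L" marks and the example constraints (Onset, NoCoda, Faith) are assumptions made for illustration, not the book's own notation.

```python
# Minimal sketch of Recursive Constraint Demotion (RCD).
# Each winner-loser pair maps a constraint name to "W" (the constraint
# prefers the winner) or "L" (it prefers the loser); absent constraints
# have no preference.

def rcd(constraints, pairs):
    """Return a stratified ranking (list of strata, highest first),
    or None if no consistent ranking exists."""
    ranking = []
    remaining = list(pairs)
    unranked = set(constraints)
    while unranked:
        # Rank next every constraint that prefers no loser in any
        # still-unexplained pair.
        stratum = {c for c in unranked
                   if all(p.get(c, "e") != "L" for p in remaining)}
        if not stratum:
            return None  # inconsistent data: nothing can be ranked
        ranking.append(stratum)
        unranked -= stratum
        # Discard pairs now explained by a ranked winner-preferring
        # constraint.
        remaining = [p for p in remaining
                     if not any(p.get(c, "e") == "W" for c in stratum)]
    return ranking

# Hypothetical data: two pairs forcing Onset and NoCoda above Faith.
constraints = ["Onset", "NoCoda", "Faith"]
pairs = [
    {"Onset": "W", "Faith": "L"},
    {"NoCoda": "W", "Faith": "L"},
]
print(rcd(constraints, pairs))  # -> [{'Onset', 'NoCoda'}, {'Faith'}]
```

Demotion here is implicit: a loser-preferring constraint is withheld from the current stratum, and so ends up ranked below the constraints whose placement explains the data that incriminate it.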

Of primary concern to the authors are the ambiguity of the data received by the learner and the resulting interdependence between the core grammar and the structural analysis of overt linguistic forms. They argue that iterative approaches inspired by work in statistical learning theory can be successfully adapted to address these interdependencies, with both OT and Constraint Demotion playing critical roles in the adaptation. The authors support their findings both formally and through simulations, and they illustrate how their approach could be extended to other language-learning issues, including subset relations and the learning of phonological underlying forms.

By Bruce Tesar and Paul Smolensky