What Inductive Biases Enable Human-Like Syntactic Generalization?

Date
Tuesday, January 21st, 2020, 12:00 - 1:15pm
Location
Margaret Jacks Hall, Greenberg Room (460-126)
Speaker
Tal Linzen
Johns Hopkins University


Humans apply their knowledge of syntax in a systematic way to constructions that are rare or absent in their linguistic input. This observation, traditionally discussed under the banner of the poverty of the stimulus, has motivated the assumption that humans are innately endowed with inductive biases that make crucial reference to syntactic structure. This assumption may appear to be called into question by the applied success of systems based on artificial neural networks, which are not designed to incorporate such biases. In practice, however, such success speaks to this question in an indirect way at best, as engineering benchmarks do not test whether the system in fact generalizes as humans do. In this talk, I will use established psycholinguistic paradigms to examine the syntactic generalization capabilities of contemporary neural network architectures, focusing on the classic cases of subject-verb agreement and subject-auxiliary inversion in English question formation. I will demonstrate how neural networks with and without explicit syntactic structure can be used to test for the necessity and sufficiency of structural inductive biases. Finally, I will present experiments indicating that human-like generalization requires stronger inductive biases than those expressed in standard neural network architectures.
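To make the subject-verb agreement paradigm concrete, below is a minimal sketch of the kind of probe used in this line of work: querying a pretrained language model for the probability of a grammatical versus an ungrammatical verb form after a prefix containing an "attractor" noun. The choice of GPT-2 (via the HuggingFace transformers library) and the sentence materials are illustrative assumptions, not the specific models or stimuli from the talk.

```python
# Sketch of an agreement probe: does the model prefer the grammatical
# verb form after a prefix with a number-mismatched attractor noun?
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def verb_logprob(prefix: str, verb: str) -> float:
    """Log-probability the model assigns to `verb` as the next word after `prefix`."""
    ids = tokenizer(prefix, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]          # next-token logits
    log_probs = torch.log_softmax(logits, dim=-1)
    # GPT-2's BPE vocabulary encodes the word-initial space, hence " " + verb.
    verb_id = tokenizer(" " + verb).input_ids[0]
    return log_probs[verb_id].item()

# Singular head noun ("key") with an intervening plural attractor ("cabinets"):
prefix = "The key to the cabinets"
grammatical = verb_logprob(prefix, "is")
ungrammatical = verb_logprob(prefix, "are")
print(f"log P(is)  = {grammatical:.2f}")
print(f"log P(are) = {ungrammatical:.2f}")
print("prefers grammatical verb:", grammatical > ungrammatical)
```

A model that has acquired the structural generalization should assign higher probability to "is" than to "are" here, even though the plural attractor "cabinets" immediately precedes the verb.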