Communicative and Computational Efficiency in Language
Are languages grammatically structured in a way that minimizes cognitive resource requirements? Memory limitations have been argued to account for crosslinguistic word order regularities, but it has proven difficult to extract precise quantitative generalizations about language from existing mechanistic models of memory. In this talk, I provide an information-theoretic formalization of memory limitations that enables explicit calculation of the memory efficiency of languages. I advance the Efficient Tradeoff Hypothesis: the order of elements in language is under simultaneous pressure to minimize surprisal and to reduce short-term memory load. I present empirical evidence for the Efficient Tradeoff Hypothesis from four domains: a reanalysis of an artificial language learning experiment, a large-scale study of word order in corpora of 54 languages, a derivation of Greenberg's harmonic word order correlations, and an analysis of morpheme order in two agglutinative languages. The results suggest that principles of order in language can be explained via general cognitive principles, and they lend support to efficiency-based models of the structure of human language.
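The tradeoff between surprisal and memory can be illustrated with a toy sketch (my own construction, not the talk's actual model): a listener who can hold only the previous k symbols in short-term memory predicts each symbol from that bounded context, and average surprisal falls as k grows. The function name `avg_surprisal` and the toy "utterance" are illustrative assumptions; real analyses would use corpora and proper language models.

```python
import math
from collections import Counter

def avg_surprisal(seq, k):
    """Average surprisal (bits/symbol) of seq under a k-th order Markov
    model estimated from seq itself by maximum likelihood.
    k = number of previous symbols the listener can hold in memory."""
    ctx_counts = Counter()    # how often each context occurs
    joint_counts = Counter()  # how often each (context, symbol) pair occurs
    for i in range(len(seq)):
        ctx = tuple(seq[max(0, i - k):i])
        ctx_counts[ctx] += 1
        joint_counts[(ctx, seq[i])] += 1
    total = 0.0
    for i in range(len(seq)):
        ctx = tuple(seq[max(0, i - k):i])
        p = joint_counts[(ctx, seq[i])] / ctx_counts[ctx]  # P(symbol | context)
        total += -math.log2(p)
    return total / len(seq)

# Toy "utterance": a repeating four-symbol phrase. With more memory
# (larger context k), each symbol becomes more predictable, so the
# listener's average surprisal drops.
utterance = list("abac" * 10)
for k in range(3):
    print(k, avg_surprisal(utterance, k))
# → 0 1.5
#   1 0.5
#   2 0.0
```

On this view, an ordering of elements is memory-efficient to the extent that it makes surprisal fall quickly as the memory bound k increases, so that little predictive information must be carried forward over long distances.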