Probabilistic Language Understanding and Formal Linguistic Theory
Abstract: How should we characterize the relationship between computational modeling and formal theory-building in linguistics? Some (e.g., Chemla et al. 2023) have recently argued that computational modeling is valuable to the extent that it enables rigorous, quantitative assessment of linguistic theories inherited from the formalist paradigm. Drawing on two case studies in the computational framework of Bayesian pragmatics, I stake out a second position on this issue. Bayesian pragmatics has undeniably been shaped by antecedent semantic and pragmatic theory; however, I show in my dissertation that this framework has also opened up entirely new analytical and empirical directions for linguistics.
In my first case study, I show that Bayesian models of linguistic interpretation provide a rich basis on which to compare classic Kratzerian and modern probabilistic semantic theories of epistemic modals (e.g., must and might). I demonstrate that the Bayesian pragmatic framework can help reveal fundamental similarities between these two classes of theory, similarities that previous attempts at theory comparison have obscured. In my second case study, I show how Bayesian modeling allows us to quantitatively compare novel hypotheses regarding context-dependent interpretation in an under-studied domain: artifact nouns (e.g., vehicle and electronic device). This comparison, in turn, yields novel insights into the truth-conditional semantic representation of these nouns.
From these two investigations, I argue that Bayesian pragmatics is best understood neither as a mere implementational wing of the formalist paradigm, nor as that paradigm's successor, but as a dialectical partner that refines formal linguistics' theoretical assumptions and expands its domain of inquiry.