LLMs and Theories of Meaning
The recent literature on the linguistic capacities (or lack thereof) of large language models (LLMs) invites a host of philosophical questions (Bender & Koller 2020, Cappelen & Dever 2020, Piantadosi & Hill 2022). In this talk, I explore the question of whether semantic knowledge can be attributed to LLMs, drawing on a selection of theories of meaning ranging from Fregean sense (Ohmer et al. 2023) and Chomskyan internalism to semantic externalism (Cappelen & Dever 2020, Mandelkern & Linzen 2023) and Derridean deconstruction. I argue that semantic meaning is not, and never was, exclusively human (as opposed to, say, pragmatic meaning). Once this assumption is dropped, a number of semantic theories become viably applicable to contemporary LLMs, introducing a cline from basic to more complex (human?) meaning-recognition capacities, depending on the structure of the model in question.
(Jointly sponsored with the Logic and Formal Philosophy Seminar)