Why is monotonicity widespread in language? 

Speaker
Jakub Szymanik
Affiliation
University of Amsterdam
Date
Thu June 3rd 2021, 10:00 - 11:20am

Please email bwaldon [at] stanford.edu for the Zoom link.

Despite extraordinary differences between natural languages, linguists have identified many semantic universals, shared properties of meaning that have yet to receive a unified explanation. Prominent examples come from the domains of function words (e.g., natural languages lexicalize only monotone quantifiers) and content words (e.g., color terms denote convex regions of color space). In this talk, I will argue that semantic universals, such as monotonicity and its close relative convexity, are to be explained in terms of learnability. Monotone meanings are easier to learn both for computational cognitive models and for human subjects in the lab. Furthermore, monotone quantifiers emerge in simulations of cultural evolution when the learning agents are biased towards simplicity.
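
For readers unfamiliar with the property: a quantifier Q is (right) upward monotone if Q(A, B) and B ⊆ B' imply Q(A, B'), e.g., "some students smoke" entails "some students smoke or drink". The Python sketch below is an illustration only (not material from the talk); it brute-force checks this property for a few quantifiers over a small toy domain, with the quantifier definitions and domain size chosen arbitrarily for demonstration.

```python
# Illustrative sketch: brute-force check of right upward monotonicity
# for a few generalized quantifiers over a tiny finite domain.
# Q is right upward monotone iff Q(A, B) and B ⊆ B' imply Q(A, B').
from itertools import combinations

DOMAIN = {0, 1, 2, 3}

def powerset(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

QUANTIFIERS = {
    "every":        lambda A, B: A <= B,
    "some":         lambda A, B: len(A & B) >= 1,
    "at least two": lambda A, B: len(A & B) >= 2,
    "exactly two":  lambda A, B: len(A & B) == 2,  # not monotone
}

def is_upward_monotone(Q):
    """Return True iff Q(A, B) and B ⊆ B' always imply Q(A, B')."""
    subsets = powerset(DOMAIN)
    for A in subsets:
        for B in subsets:
            if not Q(A, B):
                continue
            for B_sup in subsets:
                if B <= B_sup and not Q(A, B_sup):
                    return False
    return True

for name, Q in QUANTIFIERS.items():
    print(f"{name:>13}: upward monotone = {is_upward_monotone(Q)}")
```

Running the sketch reports "every", "some", and "at least two" as upward monotone, while "exactly two" fails (adding elements to B can push the count past two), which is the kind of contrast between monotone and non-monotone meanings at issue in the talk.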