Linguistics Colloquium

What Do Pre-Trained Speech Representation Models Know? Layer-Wise Analysis and Benchmarking

Speaker
Karen Livescu
Affiliation
TTI-Chicago
Date
Friday, January 19, 2024, 3:00-4:30pm
Location
Margaret Jacks Hall, Greenberg Room (Room 126)

Abstract:  Pre-trained speech representation models have become ubiquitous in automatic speech processing over the past few years.  They have both improved the state of the art and made it feasible to learn task-specific models with very little labeled data.  However, it is not well understood what linguistic information is encoded in pre-trained models and how best to apply them to downstream tasks. In this talk I will describe recent work that begins to build an understanding of the layer-wise information learned by pre-trained speech models.  We consider a number of popular pre-trained models (wav2vec 2.0, HuBERT, and others) and investigate the extent to which their layers encode spectral, phonetic, and word-level information.  The results of these analyses also suggest some ways to improve or simplify the application of pre-trained models for downstream tasks.  Finally, I will describe our efforts to benchmark model performance on spoken language understanding tasks, in order to broaden our understanding of the capabilities of state-of-the-art models.
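For readers curious what "layer-wise analysis" looks like in practice, the following is a minimal sketch, assuming the Hugging Face transformers library and an illustrative wav2vec 2.0 checkpoint, of how per-layer representations can be extracted from a pre-trained model and then probed. This is an assumption-laden illustration, not the speaker's code or experimental setup.

```python
# Minimal sketch (illustrative only): extract per-layer representations from a
# pre-trained wav2vec 2.0 model using Hugging Face transformers. The checkpoint
# name and the random "audio" below are placeholder assumptions.
import numpy as np
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

checkpoint = "facebook/wav2vec2-base"  # assumed checkpoint; other pre-trained models work similarly
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(checkpoint)
model = Wav2Vec2Model.from_pretrained(checkpoint, output_hidden_states=True)
model.eval()

# One second of placeholder audio at 16 kHz; in practice this would be real speech.
waveform = np.random.randn(16000).astype(np.float32)
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.hidden_states is a tuple with one tensor per transformer layer (plus
# the initial convolutional features), each of shape (batch, frames, hidden_dim).
# Layer-wise analyses of the kind described in the abstract probe each of these
# tensors (e.g., with linear classifiers or correlation analyses) for spectral,
# phonetic, or word-level information.
for i, layer in enumerate(outputs.hidden_states):
    print(f"layer {i}: {tuple(layer.shape)}")
```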

Bio:  Karen Livescu is a Professor at TTI-Chicago. She completed her PhD at MIT in 2005. She is an ISCA Fellow and a recent IEEE Distinguished Lecturer.  She has served as a program chair/co-chair for ICLR, Interspeech, and ASRU, and is an Associate Editor for TACL and IEEE T-PAMI.  Her group's work spans a variety of topics in spoken, written, and signed language processing.

Note: The colloquium will be followed immediately by a social in the Linguistics Kitchen.