What do self-supervised speech models know about Dutch? Analyzing advantages of language-specific pre-training

Main: 4 pages, 2 figures, 1 table; Bibliography: 1 page
Abstract

How language-specific are speech representations learned by self-supervised models? Existing work has shown that a range of linguistic features can be successfully decoded from end-to-end models trained only on speech recordings. However, it is less clear to what extent pre-training on a specific language improves the encoding of language-specific linguistic information. Here we test the encoding of Dutch phonetic and lexical information in the internal representations of self-supervised Wav2Vec2 models. Pre-training exclusively on Dutch improves the representation of Dutch linguistic features compared to pre-training on similar amounts of English or larger amounts of multilingual data. This language-specific advantage is well detected by trained clustering or classification probes, and is partially observable with zero-shot metrics. Furthermore, the language-specific benefit in linguistic feature encoding aligns with downstream performance on automatic speech recognition (ASR).
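To illustrate the kind of probing the abstract describes, here is a minimal sketch (not the authors' released code) of extracting frame-level hidden states from one Wav2Vec2 layer with HuggingFace transformers and fitting a linear classification probe on them. The checkpoint name, layer index, and the data variables `X_train`/`y_train` are illustrative assumptions.

```python
# Minimal sketch: linear probing of Wav2Vec2 internal representations.
# Checkpoint, layer choice, and labeled frame data are assumptions.
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model
from sklearn.linear_model import LogisticRegression

model_name = "facebook/wav2vec2-base"  # any Wav2Vec2 checkpoint works here
extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_name)
model = Wav2Vec2Model.from_pretrained(model_name)
model.eval()

def layer_features(waveform, layer=6):
    """Return frame-level hidden states from one transformer layer."""
    inputs = extractor(waveform, sampling_rate=16000, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    # hidden_states[0] is the CNN encoder output; index k = transformer layer k
    return out.hidden_states[layer].squeeze(0)  # shape: (frames, hidden_dim)

# X_train: stacked frame vectors; y_train: frame-aligned phone labels
# (hypothetical data, e.g. from a forced-aligned Dutch corpus)
# probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
# print("probe accuracy:", probe.score(X_test, y_test))
```

Probe accuracy per layer then indicates where in the network a given linguistic feature is most linearly decodable; the zero-shot metrics mentioned in the abstract would instead score the raw representations without fitting any classifier.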

@article{kloots2025_2506.00981,
  title={What do self-supervised speech models know about Dutch? Analyzing advantages of language-specific pre-training},
  author={Marianne de Heer Kloots and Hosein Mohebbi and Charlotte Pouw and Gaofei Shen and Willem Zuidema and Martijn Bentum},
  journal={arXiv preprint arXiv:2506.00981},
  year={2025}
}