You Are What You Say: Exploiting Linguistic Content for VoicePrivacy Attacks

Speaker anonymization systems hide the identity of speakers while preserving other information such as linguistic content and emotions. To evaluate their privacy benefits, attacks in the form of automatic speaker verification (ASV) systems are employed. In this study, we assess the impact of intra-speaker linguistic content similarity in the attacker training and evaluation datasets by adapting BERT, a language model, as an ASV system. On the VoicePrivacy Attacker Challenge datasets, our method achieves a mean equal error rate (EER) of 35%, with certain speakers attaining EERs as low as 2%, based solely on the textual content of their utterances. Our explainability study reveals that the system decisions are linked to semantically similar keywords within utterances, stemming from how LibriSpeech is curated. Our study suggests reworking the VoicePrivacy datasets to ensure a fair and unbiased evaluation, and challenges the reliance on a global EER for privacy evaluation.
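
To illustrate the general idea (not the authors' exact system), the sketch below scores a verification trial by the cosine similarity of mean-pooled BERT embeddings of the two utterances' transcripts, and computes the EER over a set of trials. Model name, pooling choice, and helper names are assumptions for illustration only.

# Minimal sketch, assuming a generic pretrained BERT and transcript pairs as trials.
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.metrics import roc_curve

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embed(text: str) -> np.ndarray:
    """Mean-pooled BERT embedding of one transcript."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # shape: (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0).numpy()

def trial_score(text_a: str, text_b: str) -> float:
    """Cosine similarity between two transcript embeddings (higher = same speaker)."""
    a, b = embed(text_a), embed(text_b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def equal_error_rate(labels: np.ndarray, scores: np.ndarray) -> float:
    """EER: operating point where false-acceptance and false-rejection rates coincide."""
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1.0 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))
    return float((fpr[idx] + fnr[idx]) / 2.0)

A text-only attacker of this kind succeeds whenever a speaker's utterances share topical keywords across enrollment and trial data, which is the dataset effect the abstract attributes to LibriSpeech curation.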
@article{gaznepoglu2025_2506.09521,
  title   = {You Are What You Say: Exploiting Linguistic Content for VoicePrivacy Attacks},
  author  = {Ünal Ege Gaznepoglu and Anna Leschanowsky and Ahmad Aloradi and Prachi Singh and Daniel Tenbrinck and Emanuël A. P. Habets and Nils Peters},
  journal = {arXiv preprint arXiv:2506.09521},
  year    = {2025}
}