AsyncSwitch: Asynchronous Text-Speech Adaptation for Code-Switched ASR

Developing code-switched ASR systems is challenging due to language ambiguity and limited exposure to multilingual, code-switched data, and collecting such speech is costly. Prior work generates synthetic audio from text, but these methods are computationally intensive and hard to scale. We introduce AsyncSwitch, a novel asynchronous adaptation framework that leverages large-scale, text-rich web data to pre-expose ASR models to diverse code-switched domains before fine-tuning on paired speech-text corpora. Our three-stage process (1) trains the decoder's self-attention and feed-forward layers on code-switched text, (2) aligns the decoder with the encoder via cross-attention using limited speech-text data, and (3) fine-tunes the entire model. Experiments with Whisper on Malay-English code-switching demonstrate a 9.02% relative WER reduction, while also improving monolingual performance on Singlish, Malay, and other English variants.
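As a rough illustration of the staged schedule, the sketch below toggles parameter groups of a Hugging Face Whisper checkpoint stage by stage. The module names (self_attn, encoder_attn, fc1, fc2) follow transformers' WhisperDecoderLayer, but the cumulative unfreezing and the set_trainable helper are assumptions for illustration, not the authors' released code.

from transformers import WhisperForConditionalGeneration

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

def set_trainable(stage: int) -> None:
    """Freeze everything, then unfreeze the groups trained up to `stage`."""
    for p in model.parameters():
        p.requires_grad = False
    for layer in model.model.decoder.layers:
        if stage >= 1:
            # Stage 1: decoder self-attention + feed-forward layers,
            # trained on code-switched text alone.
            for module in (layer.self_attn, layer.fc1, layer.fc2):
                for p in module.parameters():
                    p.requires_grad = True
        if stage >= 2:
            # Stage 2: decoder-encoder cross-attention, aligned on a
            # small paired speech-text corpus.
            for p in layer.encoder_attn.parameters():
                p.requires_grad = True
    if stage >= 3:
        # Stage 3: full fine-tuning of the whole model.
        for p in model.parameters():
            p.requires_grad = True

set_trainable(stage=1)  # begin with text-only decoder adaptation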
@article{nguyen2025_2506.14190,
  title={AsyncSwitch: Asynchronous Text-Speech Adaptation for Code-Switched ASR},
  author={Tuan Nguyen and Huy-Dat Tran},
  journal={arXiv preprint arXiv:2506.14190},
  year={2025}
}