
State-Space Models in Efficient Whispered and Multi-dialect Speech Recognition

Main: 4 pages, 3 figures, 3 tables; bibliography: 1 page
Abstract

Whispered speech recognition presents significant challenges for conventional automatic speech recognition systems, particularly when combined with dialect variation. An efficient method that can address this problem with a small dataset and a low processing load is therefore desirable. This paper proposes a solution using a Mamba-based state-space model alongside four fine-tuned self-supervised models, Wav2Vec2, WavLM, HuBERT, and Whisper, to address the dual challenges of whispered speech and dialect diversity. To the best of our knowledge, this represents the best performance reported on the wTIMIT and CHAINS datasets for whispered speech recognition. We trained the models on whispered and normal speech data across Singaporean, US, and Irish dialects. The findings demonstrate that the proposed Mamba-based model is highly efficient: trained on only a small amount of whispered data, it can recognize whispered and normal speech simultaneously. The code for this work is freely available.
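To give a sense of the core mechanism behind a state-space model such as Mamba, the sketch below runs a discrete linear SSM recurrence over a sequence of feature frames. This is a minimal, illustrative example only, not the paper's implementation: in Mamba the transition and projection matrices are input-dependent (selective) and the scan is parallelized, whereas here they are fixed matrices applied in a plain loop.

```python
import numpy as np

def ssm_scan(A, B, C, u):
    """Discrete linear state-space recurrence over an input sequence.

    x[t] = A @ x[t-1] + B @ u[t]
    y[t] = C @ x[t]

    Mamba-style models make A, B, C functions of the input (selective
    scan); fixed matrices are used here purely for illustration.
    """
    x = np.zeros(A.shape[0])          # hidden state, starts at zero
    ys = []
    for u_t in u:                     # sequential scan over time steps
        x = A @ x + B @ u_t           # state update
        ys.append(C @ x)              # readout
    return np.stack(ys)

# Toy setup (all shapes hypothetical): 4-dim hidden state,
# 2-dim input features, 10 time steps.
rng = np.random.default_rng(0)
A = 0.9 * np.eye(4)                   # stable, decaying transition
B = rng.standard_normal((4, 2))
C = rng.standard_normal((1, 4))
u = rng.standard_normal((10, 2))      # sequence of 10 feature frames
y = ssm_scan(A, B, C, u)
print(y.shape)  # (10, 1)
```

Because the per-step update is linear in the state, the recurrence runs in time linear in sequence length, which is the efficiency property the paper leverages relative to attention-based encoders.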

@article{farhadipour2025_2506.16969,
  title={State-Space Models in Efficient Whispered and Multi-dialect Speech Recognition},
  author={Aref Farhadipour and Homayoon Beigi and Volker Dellwo and Hadi Veisi},
  journal={arXiv preprint arXiv:2506.16969},
  year={2025}
}