
GigaAM: Efficient Self-Supervised Learner for Speech Recognition

Main: 4 pages; Bibliography: 1 page; 3 figures; 4 tables
Abstract

Self-Supervised Learning (SSL) has demonstrated strong performance in speech processing, particularly in automatic speech recognition. In this paper, we explore an SSL pretraining framework that leverages masked language modeling with targets derived from a speech recognition model. We also present chunkwise attention with dynamic chunk size sampling during pretraining to enable both full-context and streaming fine-tuning. Our experiments examine scaling with respect to model size and the amount of data. Using our method, we train the GigaAM family of models, including a state-of-the-art model for Russian speech recognition that outperforms Whisper-large-v3 by 50%. We have released our foundation and ASR models, along with the inference code, under the MIT license as open-source resources to the research community. Available at this https URL.
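
As a rough illustration of the chunkwise attention with dynamic chunk size sampling mentioned in the abstract: the minimal PyTorch sketch below builds a chunk-local attention mask and samples the chunk size per training batch. The function names, the full-context probability, and the chunk-size range are our own illustrative assumptions, not values from the paper.

import torch

def chunkwise_attention_mask(seq_len: int, chunk_size: int) -> torch.Tensor:
    # True marks key positions a query frame may attend to. Restricting
    # attention to the query's own chunk mimics streaming inference,
    # where frames beyond the current chunk are unavailable.
    chunk_id = torch.arange(seq_len) // chunk_size
    return chunk_id.unsqueeze(0) == chunk_id.unsqueeze(1)

def sample_chunk_size(seq_len: int, full_context_prob: float = 0.5,
                      min_chunk: int = 16, max_chunk: int = 128) -> int:
    # Dynamic chunk size sampling: occasionally treat the whole sequence
    # as one chunk (full-context training); otherwise draw a random
    # chunk size. The probability and range here are hypothetical.
    if torch.rand(()).item() < full_context_prob:
        return seq_len
    return int(torch.randint(min_chunk, max_chunk + 1, ()).item())

# Per batch: sample a chunk size, build the mask, and pass it to the
# encoder's self-attention (e.g., as a boolean attention mask).
mask = chunkwise_attention_mask(seq_len=400, chunk_size=sample_chunk_size(400))

Training over a mix of chunk sizes, including the full-context case, is what would let a single pretrained encoder be fine-tuned either with unrestricted attention or with a fixed small chunk for streaming.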

@article{kutsakov2025_2506.01192,
  title={GigaAM: Efficient Self-Supervised Learner for Speech Recognition},
  author={Aleksandr Kutsakov and Alexandr Maximenko and Georgii Gospodinov and Pavel Bogomolov and Fyodor Minkin},
  journal={arXiv preprint arXiv:2506.01192},
  year={2025}
}