OpenAI Whisper is a family of robust Automatic Speech Recognition (ASR) models trained on 680,000 hours of audio. However, its encoder-decoder architecture, trained with a sequence-to-sequence objective, lacks native support for streaming ASR. In this paper, we fine-tune Whisper for streaming ASR using the WeNet toolkit by adopting a Unified Two-pass (U2) structure. We introduce an additional Connectionist Temporal Classification (CTC) decoder trained with causal attention masks to generate streaming partial transcripts, while the original Whisper decoder reranks these partial outputs. Our experiments on LibriSpeech and an earnings call dataset demonstrate that, with adequate fine-tuning data, Whisper can be adapted into a capable streaming ASR model. We also introduce a hybrid tokenizer approach, which uses a smaller token space for the CTC decoder while retaining Whisper's original token space for the attention decoder, resulting in improved data efficiency and generalization.
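To make the two-pass flow concrete, below is a minimal, runnable sketch of U2-style decoding: the CTC branch emits streaming partials as audio chunks arrive, and the attention decoder rescores first-pass hypotheses once the utterance ends. The names `encoder`, `ctc_head`, and `att_scorer` are hypothetical stand-ins for the fine-tuned Whisper encoder, the added CTC decoder, and the original attention decoder; this is an illustration of the technique, not the paper's actual WeNet code.

```python
import torch
import torch.nn as nn

def ctc_collapse(frame_ids, blank=0):
    """Standard CTC rule: merge repeated frame labels, then drop blanks."""
    out, prev = [], blank
    for i in frame_ids:
        if i != blank and i != prev:
            out.append(i)
        prev = i
    return out

@torch.no_grad()
def two_pass_decode(encoder, ctc_head, att_scorer, chunks, beam=4):
    """First pass: streaming partials from the CTC branch per audio chunk.
    Second pass: rerank first-pass hypotheses with the attention decoder."""
    enc_parts = []
    for chunk in chunks:                      # audio arrives incrementally
        enc_parts.append(encoder(chunk))      # causal/chunked encoding in U2
        enc = torch.cat(enc_parts, dim=0)
        logp = ctc_head(enc).log_softmax(-1)  # frame-level CTC posteriors
        print("partial:", ctc_collapse(logp.argmax(-1).tolist()))
    # Toy n-best for illustration (WeNet uses CTC prefix beam search):
    greedy = ctc_collapse(logp.argmax(-1).tolist())
    hyps = [greedy[: len(greedy) - k] for k in range(min(beam, len(greedy)))] or [greedy]
    scores = [att_scorer(enc, h) for h in hyps]
    return max(zip(scores, hyps), key=lambda sh: sh[0])[1]

# Demo with random stand-ins for the fine-tuned Whisper components.
torch.manual_seed(0)
encoder = nn.Linear(80, 16)                    # stands in for Whisper's encoder
ctc_head = nn.Linear(16, 32)                   # the added CTC decoder head
att_scorer = lambda enc, h: -abs(len(h) - 10)  # stands in for attention rescoring
chunks = [torch.randn(25, 80) for _ in range(3)]
print("final:", two_pass_decode(encoder, ctc_head, att_scorer, chunks))
```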
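The hybrid tokenizer can be pictured as follows: the CTC branch predicts over a small vocabulary (assumed character-level here), and its first-pass string is re-encoded into Whisper's original token space before attention rescoring. The vocabulary and helper names below are illustrative assumptions, not taken from the paper.

```python
# Illustrative hybrid-tokenizer mapping (assumed character-level CTC vocab;
# the paper's actual small token space may differ).
CTC_VOCAB = ["<blank>"] + list("abcdefghijklmnopqrstuvwxyz '")
CTC_ID = {tok: i for i, tok in enumerate(CTC_VOCAB)}

def ctc_ids_to_text(ids):
    """Decode small-vocab CTC output (blanks already removed) to a string."""
    return "".join(CTC_VOCAB[i] for i in ids)

def to_whisper_tokens(text, whisper_tokenizer):
    """Re-encode the first-pass hypothesis into Whisper's original
    BPE token space so the unmodified attention decoder can rescore it."""
    return whisper_tokenizer.encode(text)

# Example: a first-pass hypothesis expressed in the small CTC space.
hyp = [CTC_ID[c] for c in "hello world"]
print(ctc_ids_to_text(hyp))  # -> "hello world"
```

Decoupling the two vocabularies keeps the CTC softmax small while the attention decoder continues to operate over Whisper's pretrained token space, which is consistent with the data-efficiency and generalization gains the paper reports.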
@article{zhou2025_2506.12154,
  title={Adapting Whisper for Streaming Speech Recognition via Two-Pass Decoding},
  author={Haoran Zhou and Xingchen Song and Brendan Fahy and Qiaochu Song and Binbin Zhang and Zhendong Peng and Anshul Wadhawan and Denglin Jiang and Apurv Verma and Vinay Ramesh and Srivas Prasad and Michele M. Franceschini},
  journal={arXiv preprint arXiv:2506.12154},
  year={2025}
}