Adapting Whisper for Streaming Speech Recognition via Two-Pass Decoding

13 June 2025
Haoran Zhou, Xingchen Song, Brendan Fahy, Qiaochu Song, Binbin Zhang, Zhendong Peng, Anshul Wadhawan, Denglin Jiang, Apurv Verma, Vinay Ramesh, Srivas Prasad, Michele M. Franceschini
Main: 4 pages · 4 figures · 3 tables · Bibliography: 1 page
Abstract

OpenAI Whisper is a family of robust Automatic Speech Recognition (ASR) models trained on 680,000 hours of audio. However, its encoder-decoder architecture, trained with a sequence-to-sequence objective, lacks native support for streaming ASR. In this paper, we fine-tune Whisper for streaming ASR using the WeNet toolkit by adopting a Unified Two-pass (U2) structure. We introduce an additional Connectionist Temporal Classification (CTC) decoder trained with causal attention masks to generate streaming partial transcripts, while the original Whisper decoder reranks these partial outputs. Our experiments on LibriSpeech and an earnings call dataset demonstrate that, with adequate fine-tuning data, Whisper can be adapted into a capable streaming ASR model. We also introduce a hybrid tokenizer approach, which uses a smaller token space for the CTC decoder while retaining Whisper's original token space for the attention decoder, resulting in improved data efficiency and generalization.
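As an illustration of the two-pass flow the abstract describes, here is a minimal PyTorch sketch; it is not the authors' WeNet implementation. The tensors are toys, and ToyAttnDecoder with its score() method is a hypothetical stand-in for Whisper's attention decoder. The first pass emits greedy CTC partials as audio chunks arrive; the second pass reranks the CTC hypotheses once the utterance ends.

import torch

class ToyAttnDecoder:
    """Stand-in for Whisper's attention decoder; score() is a hypothetical API."""
    def score(self, enc_out, hyp):
        # A real implementation would sum log p(y_t | y_<t, enc_out) with
        # teacher forcing; a dummy length penalty keeps the sketch runnable.
        return -0.1 * len(hyp)

def ctc_greedy_partial(log_probs, blank=0):
    """First pass: greedy CTC over the frames seen so far (streaming partial)."""
    ids = log_probs.argmax(dim=-1).tolist()
    hyp, prev = [], blank
    for i in ids:
        if i != blank and i != prev:  # collapse repeats, drop blanks
            hyp.append(i)
        prev = i
    return hyp

def attention_rescore(decoder, enc_out, nbest, ctc_weight=0.5):
    """Second pass: rerank the CTC n-best with the attention decoder."""
    # With the hybrid tokenizer, each CTC hypothesis would be detokenized from
    # the smaller CTC vocabulary and re-encoded with Whisper's tokenizer
    # before scoring; that step is omitted here.
    scored = [((1 - ctc_weight) * decoder.score(enc_out, h) + ctc_weight * s, h)
              for h, s in nbest]
    return max(scored)[1]

# Simulate streaming: CTC partials per chunk, attention rescoring at the end.
torch.manual_seed(0)
enc_out = torch.randn(1, 50, 8)                     # toy encoder states
log_probs = torch.randn(50, 6).log_softmax(dim=-1)  # toy CTC posteriors
for t in range(10, 51, 10):                         # audio arrives in chunks
    print(f"partial @ {t} frames:", ctc_greedy_partial(log_probs[:t]))
nbest = [(ctc_greedy_partial(log_probs), 0.0)]      # 1-best for brevity
print("final:", attention_rescore(ToyAttnDecoder(), enc_out, nbest))

The design intuition is that the causal CTC head trades accuracy for low latency, while the attention decoder, which sees the full encoder output, recovers accuracy in the final pass.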

View on arXiv: https://arxiv.org/abs/2506.12154
@article{zhou2025_2506.12154,
  title={Adapting Whisper for Streaming Speech Recognition via Two-Pass Decoding},
  author={Haoran Zhou and Xingchen Song and Brendan Fahy and Qiaochu Song and Binbin Zhang and Zhendong Peng and Anshul Wadhawan and Denglin Jiang and Apurv Verma and Vinay Ramesh and Srivas Prasad and Michele M. Franceschini},
  journal={arXiv preprint arXiv:2506.12154},
  year={2025}
}