Unifying Streaming and Non-streaming Zipformer-based ASR

There has been increasing interest in unifying streaming and non-streaming automatic speech recognition (ASR) models to reduce development, training, and deployment costs. We present a unified framework that trains a single end-to-end ASR model for both streaming and non-streaming applications, leveraging future context information. We propose using dynamic right-context through chunked attention masking in the training of zipformer-based ASR models. We demonstrate that using right-context is more effective in zipformer models than in other conformer-style models, owing to the zipformer's multi-scale nature. We analyze the effect of varying the number of right-context frames on the accuracy and latency of streaming ASR models. We use LibriSpeech and large in-house conversational datasets to train different versions of streaming and non-streaming models, and evaluate them in a production-grade server-client setup across diverse test sets from different domains. The proposed strategy reduces word error rate by 7.9% relative, with a small degradation in user-perceived latency. By adding more right-context frames, we are able to achieve streaming performance close to that of non-streaming models. Our approach also allows flexible control of the latency-accuracy tradeoff according to customer requirements.
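
To illustrate the idea of chunked attention masking with a configurable right-context, the sketch below builds a boolean attention mask in PyTorch. This is a minimal, assumed construction, not the authors' released code: the function name, its arguments, and the True-blocks-attention convention (as used by torch.nn.MultiheadAttention's boolean attn_mask) are illustrative.

import torch

def chunked_attention_mask(num_frames: int, chunk_size: int, right_context: int) -> torch.Tensor:
    """Boolean mask for chunk-wise streaming attention.

    Each query frame may attend to all frames in its own and earlier chunks,
    plus up to `right_context` future frames beyond its chunk. True marks
    positions that are blocked.
    """
    # Chunk index of each frame.
    chunk_idx = torch.arange(num_frames) // chunk_size
    # Last key index visible to each query: end of its chunk plus right-context.
    last_visible = (chunk_idx + 1) * chunk_size - 1 + right_context
    key_idx = torch.arange(num_frames)
    # Block attention to keys beyond the allowed horizon.
    return key_idx[None, :] > last_visible[:, None]

# Example: 12 frames, chunks of 4, with 2 right-context frames.
mask = chunked_attention_mask(num_frames=12, chunk_size=4, right_context=2)

With right_context = 0 this reduces to plain chunked (streaming) masking, while a very large right_context approaches full (non-streaming) attention; sampling right_context per batch during training is one way the "dynamic" behavior described in the abstract could be realized.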
@article{sharma2025_2506.14434,
  title   = {Unifying Streaming and Non-streaming Zipformer-based ASR},
  author  = {Bidisha Sharma and Karthik Pandia Durai and Shankar Venkatesan and Jeena J Prakash and Shashi Kumar and Malolan Chetlur and Andreas Stolcke},
  journal = {arXiv preprint arXiv:2506.14434},
  year    = {2025}
}