We propose Windowed Inference for Non-blank Detection (WIND), a novel strategy that significantly accelerates RNN-T inference without compromising model accuracy. Instead of processing encoder frames sequentially, WIND scores multiple frames within a window in parallel, allowing the model to quickly locate non-blank predictions during decoding and yielding substantial speed-ups. We implement WIND for greedy decoding and for batched greedy decoding with label-looping, and additionally propose a novel beam-search decoding method. Experiments on multiple datasets under different conditions show that, in greedy modes, our method achieves speed-ups of up to 2.4X over the baseline sequential approach while maintaining identical Word Error Rate (WER). Our beam-search algorithm attains slightly better accuracy than alternative methods at significantly higher speed. We will open-source our WIND implementation.
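To make the windowed idea concrete, here is a minimal, hypothetical sketch of WIND-style greedy decoding. It is not the authors' implementation: `joint_logits` is a stub standing in for a real RNN-T joint network, `BLANK_ID` is an assumed blank index, and for simplicity at most one symbol is emitted per frame. The key point it illustrates is that a whole window of frames is scored in one batched call, so runs of blank frames are skipped wholesale and the decoder jumps straight to the first non-blank prediction.

```python
import numpy as np

BLANK_ID = 0  # assumed index of the blank token

def joint_logits(frames, dec_state):
    """Stub joint network for illustration only: emits one-hot logits where
    each frame's best label is the frame's own integer value. A real RNN-T
    joint would combine encoder frames with the prediction-network state."""
    vocab = 5
    out = np.zeros((len(frames), vocab))
    out[np.arange(len(frames)), frames] = 1.0
    return out

def wind_greedy(enc_frames, window=4, max_symbols=100):
    """Sketch of WIND-style greedy decoding: score `window` frames in one
    batched call, jump to the first non-blank frame, emit its label, and
    continue from there. Blank-only windows are skipped in a single step.
    Simplification: at most one symbol is emitted per frame."""
    hyp, t, dec_state = [], 0, None
    T = len(enc_frames)
    while t < T and len(hyp) < max_symbols:
        labels = joint_logits(enc_frames[t:t + window], dec_state).argmax(-1)
        nonblank = np.flatnonzero(labels != BLANK_ID)
        if nonblank.size == 0:
            t += len(labels)              # whole window is blank: skip it
        else:
            k = int(nonblank[0])          # first non-blank frame in window
            hyp.append(int(labels[k]))
            dec_state = hyp[-1]           # stand-in for a decoder-state update
            t += k + 1                    # advance past the emitting frame
    return hyp
```

With the stub joint, a frame sequence like `[0, 0, 0, 2, 0, 3, 0, 0]` decodes to `[2, 3]` while touching only three windows, whereas sequential decoding would issue one joint call per frame.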
@article{xu2025_2505.13765,
  title   = {WIND: Accelerated RNN-T Decoding with Windowed Inference for Non-blank Detection},
  author  = {Hainan Xu and Vladimir Bataev and Lilit Grigoryan and Boris Ginsburg},
  journal = {arXiv preprint arXiv:2505.13765},
  year    = {2025}
}