Early Attentive Sparsification Accelerates Neural Speech Transcription

Main: 3 pages · Appendix: 4 pages · Bibliography: 2 pages · 8 figures · 2 tables
Abstract

Transformer-based neural speech processing has achieved state-of-the-art performance. Since speech audio signals are known to be highly compressible, we seek to accelerate neural speech transcription by sparsifying the time-domain signal early in the neural encoding stage, exploiting the interpretability of the self-attention mechanism in transformer audio encoders. With the Whisper family of models, we perform a systematic architecture search over the joint space of sparsification stage (a particular encoder layer) and compression ratio (sparsity). We find that the best solutions under 1% accuracy degradation sparsify the hidden state to 40-60% sparsity at an early encoding stage, achieving up to 1.6x runtime acceleration on English speech transcription tasks on Nvidia GPUs without any fine-tuning.
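
The sketch below illustrates the kind of attention-guided pruning the abstract describes: at an early encoder layer, each time frame is scored by the self-attention it receives, and only the top fraction of frames is kept before the remaining layers run. This is a minimal illustration, not the authors' released code; the function name `sparsify_hidden_state` and the scoring rule (mean attention received per frame, averaged over heads and queries) are assumptions.

```python
import torch

def sparsify_hidden_state(hidden: torch.Tensor,
                          attn: torch.Tensor,
                          sparsity: float = 0.5) -> torch.Tensor:
    """Keep the (1 - sparsity) fraction of time frames that receive the
    most self-attention; drop the rest. (Illustrative sketch.)

    hidden:   (batch, seq_len, d_model) hidden state at an early encoder layer
    attn:     (batch, heads, seq_len, seq_len) self-attention weights of that layer
    sparsity: fraction of frames to remove (e.g. 0.4-0.6 as in the paper)
    """
    # Score each frame by the attention it receives, averaged over
    # heads and query positions.
    scores = attn.mean(dim=1).mean(dim=1)              # (batch, seq_len)
    keep = max(1, int(hidden.size(1) * (1.0 - sparsity)))
    # Select the top-scoring frames and restore temporal order.
    idx = scores.topk(keep, dim=-1).indices.sort(dim=-1).values
    idx = idx.unsqueeze(-1).expand(-1, -1, hidden.size(-1))
    return hidden.gather(1, idx)
```

After this step, the shorter sequence is passed to the remaining encoder layers, which is where the runtime saving comes from, since self-attention cost scales quadratically with sequence length.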

@article{xu2025_2506.15912,
  title={Early Attentive Sparsification Accelerates Neural Speech Transcription},
  author={Zifei Xu and Sayeh Sharify and Hesham Mostafa and Tristan Webb and Wanzin Yazar and Xin Wang},
  journal={arXiv preprint arXiv:2506.15912},
  year={2025}
}