Modern autoregressive speech synthesis models leveraging language models have demonstrated remarkable performance. However, the sequential nature of next-token prediction in these models incurs significant latency, hindering their deployment in scenarios where inference speed is critical. In this work, we propose Speech Speculative Decoding (SSD), a novel framework for accelerating autoregressive speech synthesis. Specifically, our method employs a lightweight draft model to generate candidate token sequences, which are subsequently verified in parallel by the target model. Experimental results demonstrate that SSD achieves a significant speedup of 1.4x over conventional autoregressive decoding while maintaining high fidelity and naturalness. Subjective evaluations further validate the effectiveness of SSD in preserving the perceptual quality of the target model while accelerating inference.
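The abstract describes the standard draft-and-verify pattern behind speculative decoding: a cheap draft model proposes several tokens, and the expensive target model checks them in one parallel pass, accepting the longest agreeing prefix. The following is a minimal greedy sketch of that general pattern, not the paper's exact SSD algorithm; the function names `target_step` and `draft_step` and the toy models are illustrative assumptions.

```python
def speculative_decode(target_step, draft_step, prompt, n_tokens, k=4):
    """Greedy draft-and-verify speculative decoding (generic sketch,
    not the paper's exact SSD method).

    target_step(seq) -> next-token id under the slow target model
    draft_step(seq)  -> next-token id under the fast draft model
    k                -> number of tokens the draft proposes per round
    """
    seq = list(prompt)
    while len(seq) - len(prompt) < n_tokens:
        # 1) Draft model proposes k candidate tokens autoregressively.
        cand = []
        for _ in range(k):
            cand.append(draft_step(seq + cand))
        # 2) Target model verifies the candidates. A real implementation
        #    scores all k positions in a single parallel forward pass;
        #    here we simulate that by querying the target per prefix.
        accepted = []
        for i in range(k):
            t = target_step(seq + accepted)
            accepted.append(t)   # target's token is always correct to keep
            if t != cand[i]:
                break            # mismatch: discard the remaining drafts
        seq.extend(accepted)
    return seq[:len(prompt) + n_tokens]


# Toy demo: the "target" continues an arithmetic sequence. With a perfect
# draft every proposal is accepted (few rounds); with a bad draft only one
# token survives per round, but the output is identical either way.
good = speculative_decode(lambda s: s[-1] + 1, lambda s: s[-1] + 1, [0], 8)
bad = speculative_decode(lambda s: s[-1] + 1, lambda s: s[-1] + 2, [0], 8)
```

Because the target model always overrides a mismatched draft token, greedy speculative decoding reproduces the target's output exactly; the draft model only affects speed (how many tokens are accepted per target pass), which is why quality is preserved while latency drops.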
@article{lin2025_2505.15380,
  title={Accelerating Autoregressive Speech Synthesis Inference With Speech Speculative Decoding},
  author={Zijian Lin and Yang Zhang and Yougen Yuan and Yuming Yan and Jinjiang Liu and Zhiyong Wu and Pengfei Hu and Qun Yu},
  journal={arXiv preprint arXiv:2505.15380},
  year={2025}
}