Accelerating Autoregressive Speech Synthesis Inference With Speech Speculative Decoding

21 May 2025
Zijian Lin, Yang Zhang, Yougen Yuan, Yuming Yan, Jinjiang Liu, Zhiyong Wu, Pengfei Hu, Qun Yu
Abstract

Modern autoregressive speech synthesis models leveraging language models have demonstrated remarkable performance. However, the sequential nature of next token prediction in these models leads to significant latency, hindering their deployment in scenarios where inference speed is critical. In this work, we propose Speech Speculative Decoding (SSD), a novel framework for autoregressive speech synthesis acceleration. Specifically, our method employs a lightweight draft model to generate candidate token sequences, which are subsequently verified in parallel by the target model using the proposed SSD framework. Experimental results demonstrate that SSD achieves a significant speedup of 1.4x compared with conventional autoregressive decoding, while maintaining high fidelity and naturalness. Subjective evaluations further validate the effectiveness of SSD in preserving the perceptual quality of the target model while accelerating inference.
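The draft-then-verify loop described in the abstract follows the general speculative-decoding recipe: a small draft model proposes a short run of tokens, and the target model scores the whole run in one parallel pass, accepting the longest prefix it agrees with. The sketch below is a minimal illustration of that idea, not the paper's exact SSD formulation: the draft_logits/target_logits callables, the draft length gamma, and the greedy acceptance rule are all illustrative assumptions.

import numpy as np

def speculative_decode(target_logits, draft_logits, prefix, gamma=4, max_len=64, eos=0):
    """Greedy speculative decoding sketch:
    1. The lightweight draft model proposes `gamma` candidate tokens.
    2. The target model scores the drafted continuation in a single parallel pass.
    3. The longest candidate prefix matching the target's own greedy choices is
       accepted; the first mismatch is replaced by the target's token.
    """
    tokens = list(prefix)
    while len(tokens) < max_len and (not tokens or tokens[-1] != eos):
        # Step 1: autoregressive drafting with the small model.
        draft = []
        for _ in range(gamma):
            logits = draft_logits(tokens + draft)
            draft.append(int(np.argmax(logits)))

        # Step 2: one parallel verification pass with the target model.
        # target_logits returns logits for each position after `tokens`,
        # conditioned on the drafted continuation: shape (gamma + 1, vocab).
        all_logits = target_logits(tokens, draft)
        target_choice = np.argmax(all_logits, axis=-1)

        # Step 3: accept the matching prefix, then append one target token
        # (either the correction at the first mismatch or a bonus token).
        accepted = 0
        while accepted < gamma and draft[accepted] == target_choice[accepted]:
            accepted += 1
        tokens.extend(draft[:accepted])
        tokens.append(int(target_choice[accepted]))
    return tokens

# Toy stand-ins so the sketch runs end-to-end (random logits, hypothetical vocab).
rng = np.random.default_rng(0)
VOCAB = 32
toy_draft = lambda ctx: rng.normal(size=VOCAB)
toy_target = lambda ctx, draft: rng.normal(size=(len(draft) + 1, VOCAB))
print(speculative_decode(toy_target, toy_draft, prefix=[1], max_len=16))

Because the target model evaluates all gamma drafted positions in one forward pass, each accepted draft token saves a sequential target-model step, which is the source of the reported 1.4x speedup over plain next-token decoding.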

View on arXiv: https://arxiv.org/abs/2505.15380
@article{lin2025_2505.15380,
  title={Accelerating Autoregressive Speech Synthesis Inference With Speech Speculative Decoding},
  author={Zijian Lin and Yang Zhang and Yougen Yuan and Yuming Yan and Jinjiang Liu and Zhiyong Wu and Pengfei Hu and Qun Yu},
  journal={arXiv preprint arXiv:2505.15380},
  year={2025}
}