VocalNet: Speech LLM with Multi-Token Prediction for Faster and High-Quality Generation

Abstract

Speech large language models (LLMs) have emerged as a prominent research focus in speech processing. We introduce VocalNet-1B and VocalNet-8B, a series of high-performance, low-latency speech LLMs enabled by a scalable and model-agnostic training framework designed for real-time voice interaction. Central to our contribution is the first application of multi-token prediction (MTP) to speech LLMs. This approach represents a paradigm shift from standard next-token prediction (NTP), offering simultaneous improvements in generation speed and quality. Informed by analysis of MTP's effect on speech generation and experimental comparisons, we designed a straightforward and highly effective MTP implementation. Experiments demonstrate that VocalNet performs on par with mainstream Omni LLMs even with limited training data, and significantly surpasses existing open-source speech LLMs. To foster reproducibility and community advancement, all model weights, inference code, training data, and framework implementations have been made publicly available at this https URL.
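To illustrate the core idea of multi-token prediction referenced above, the following is a minimal, hypothetical sketch of MTP output heads in PyTorch: instead of a single next-token head (NTP), several lightweight heads each predict a token at a different future offset from the same hidden state. The module name, head count, and dimensions are illustrative assumptions and do not reflect VocalNet's actual implementation.

```python
# Minimal, hypothetical sketch of multi-token prediction (MTP) heads.
# Not VocalNet's implementation; names and dimensions are assumptions.
import torch
import torch.nn as nn


class MTPHeads(nn.Module):
    """Predicts the next `num_heads` speech tokens from each hidden state,
    rather than a single token as in standard next-token prediction (NTP)."""

    def __init__(self, hidden_dim: int, vocab_size: int, num_heads: int = 4):
        super().__init__()
        # One lightweight projection per future position (offsets 1..num_heads).
        self.heads = nn.ModuleList(
            nn.Linear(hidden_dim, vocab_size) for _ in range(num_heads)
        )

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, hidden_dim) from the speech LLM backbone.
        # Returns logits of shape (batch, seq_len, num_heads, vocab_size);
        # head k predicts the token k+1 steps ahead of each position.
        return torch.stack([head(hidden) for head in self.heads], dim=2)


if __name__ == "__main__":
    heads = MTPHeads(hidden_dim=512, vocab_size=4096, num_heads=4)
    h = torch.randn(2, 10, 512)  # dummy backbone hidden states
    print(heads(h).shape)        # torch.Size([2, 10, 4, 4096])
```

At decode time, emitting several speech tokens per backbone forward pass is what yields the latency reduction the abstract describes; the quality aspect depends on how the heads are trained and verified, which this sketch does not cover.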

@article{wang2025_2504.04060,
  title={VocalNet: Speech LLM with Multi-Token Prediction for Faster and High-Quality Generation},
  author={Yuhao Wang and Heyang Liu and Ziyang Cheng and Ronghua Wu and Qunshan Gu and Yanfeng Wang and Yu Wang},
  journal={arXiv preprint arXiv:2504.04060},
  year={2025}
}