The rapid advancement of large language models (LLMs) has accelerated the development of multi-modal models capable of vocal communication. Unlike text-based interactions, speech conveys rich and diverse information, including semantic content, acoustic variations, paralinguistic cues, and environmental context. However, existing evaluations of speech interaction models predominantly focus on the quality of their textual responses, often overlooking critical aspects of vocal performance and lacking benchmarks with vocal-specific test instances. To address this gap, we propose VocalBench, a comprehensive benchmark designed to evaluate speech interaction models' capabilities in vocal communication. VocalBench comprises 9,400 carefully curated instances across four key dimensions: semantic quality, acoustic performance, conversational abilities, and robustness. It covers 16 fundamental skills essential for effective vocal interaction. Experimental results reveal significant variability in current model capabilities, with each model exhibiting distinct strengths and weaknesses, and provide valuable insights to guide future research in speech-based interaction systems. Code and evaluation instances are available at this https URL.
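The abstract does not describe the benchmark's data schema, so the following is only a loose illustration of how per-dimension results might be aggregated from scored instances tagged with the four dimensions named above; all field names, skill labels, and scores are hypothetical.

```python
# Hypothetical sketch: VocalBench's actual format is not specified in the abstract,
# so the instance layout and labels below are illustrative placeholders only.
from collections import defaultdict

# Each instance is assumed to carry one of the four dimensions from the abstract
# and a finer-grained skill label (standing in for one of the 16 skills).
instances = [
    {"id": "sem-0001", "dimension": "semantic quality", "skill": "knowledge", "score": 0.82},
    {"id": "aco-0001", "dimension": "acoustic performance", "skill": "clarity", "score": 0.74},
    {"id": "con-0001", "dimension": "conversational abilities", "skill": "multi-turn", "score": 0.69},
    {"id": "rob-0001", "dimension": "robustness", "skill": "noisy input", "score": 0.55},
]

def aggregate_by_dimension(records):
    """Average a model's scores within each evaluation dimension."""
    sums, counts = defaultdict(float), defaultdict(int)
    for r in records:
        sums[r["dimension"]] += r["score"]
        counts[r["dimension"]] += 1
    return {dim: sums[dim] / counts[dim] for dim in sums}

if __name__ == "__main__":
    for dim, avg in aggregate_by_dimension(instances).items():
        print(f"{dim}: {avg:.2f}")
```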