Role-Playing Agents (RPAs), which benefit from large language models, are an emerging class of interactive AI systems that simulate roles or characters with diverse personalities. However, existing methods primarily focus on mimicking dialogues among roles in textual form, neglecting the roles' voice traits (e.g., voice style and emotions), which play a crucial role in interaction and make the experience more immersive in realistic scenarios. Towards this goal, we propose OmniCharacter, the first seamless speech-language personality interaction model for immersive RPAs with low latency. Specifically, OmniCharacter enables agents to consistently exhibit role-specific personality and vocal traits throughout the interaction, allowing for a mixture of speech and language responses. To align the model with speech-language scenarios, we construct a dataset named OmniCharacter-10K, which contains 20 distinctive characters, 10K richly contextualized multi-turn dialogues, and 135K dynamic speech responses. Experimental results show that our method yields better responses in terms of both content and style than existing RPAs and mainstream speech-language models, with a response latency as low as 289 ms. Code and dataset are available at this https URL.
@article{zhang2025_2505.20277,
  title={OmniCharacter: Towards Immersive Role-Playing Agents with Seamless Speech-Language Personality Interaction},
  author={Haonan Zhang and Run Luo and Xiong Liu and Yuchuan Wu and Ting-En Lin and Pengpeng Zeng and Qiang Qu and Feiteng Fang and Min Yang and Lianli Gao and Jingkuan Song and Fei Huang and Yongbin Li},
  journal={arXiv preprint arXiv:2505.20277},
  year={2025}
}