
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning Nonverbal Cues from Video-Grounded Dialogues

Main: 8 pages · 9 figures · 5 tables · Bibliography: 4 pages · Appendix: 7 pages
Abstract

Nonverbal communication is integral to human interaction, with gestures, facial expressions, and body language conveying critical aspects of intent and emotion. However, existing large language models (LLMs) fail to effectively incorporate these nonverbal elements, limiting their capacity to create fully immersive conversational experiences. We introduce MARS, a multimodal language model designed to understand and generate nonverbal cues alongside text, bridging this gap in conversational AI. Our key innovation is VENUS, a large-scale dataset comprising annotated videos with time-aligned text, facial expressions, and body language. Leveraging VENUS, we train MARS with a next-token prediction objective, combining text with vector-quantized nonverbal representations to achieve multimodal understanding and generation within a unified framework. Through various analyses of the VENUS dataset, we validate its substantial scale and effectiveness. Our quantitative and qualitative results demonstrate that MARS successfully generates text and nonverbal cues corresponding to conversational input.
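To make the training setup concrete, the sketch below illustrates the kind of unified next-token-prediction objective the abstract describes: vector-quantized nonverbal codes (e.g., facial expression and body-language tokens) are mapped into the same vocabulary as text tokens, and a causal language model is trained over the interleaved sequence. This is a minimal illustration under assumed vocabulary and codebook sizes, not the authors' implementation; all names and dimensions here are hypothetical.

```python
# Minimal sketch (assumed, not the authors' code) of next-token prediction
# over an interleaved sequence of text tokens and vector-quantized
# nonverbal codes sharing one unified vocabulary.
import torch
import torch.nn as nn

TEXT_VOCAB = 32000          # hypothetical text vocabulary size
FACE_CODES = 512            # hypothetical VQ codebook size for facial cues
BODY_CODES = 512            # hypothetical VQ codebook size for body cues
VOCAB = TEXT_VOCAB + FACE_CODES + BODY_CODES  # unified token space

class TinyMultimodalLM(nn.Module):
    def __init__(self, d_model=256, n_layers=2, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, VOCAB)

    def forward(self, tokens):  # tokens: (B, T)
        T = tokens.size(1)
        causal_mask = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
        h = self.backbone(self.embed(tokens), mask=causal_mask)
        return self.head(h)     # (B, T, VOCAB) next-token logits

def to_unified_vocab(face_ids, body_ids):
    """Shift VQ codebook indices into the unified text+nonverbal vocabulary."""
    return face_ids + TEXT_VOCAB, body_ids + TEXT_VOCAB + FACE_CODES

# Toy interleaved sequence: [text tokens ..., face codes ..., body codes ...]
text = torch.randint(0, TEXT_VOCAB, (1, 8))
face, body = to_unified_vocab(torch.randint(0, FACE_CODES, (1, 4)),
                              torch.randint(0, BODY_CODES, (1, 4)))
seq = torch.cat([text, face, body], dim=1)

model = TinyMultimodalLM()
logits = model(seq[:, :-1])  # predict each next token in the mixed sequence
loss = nn.functional.cross_entropy(logits.reshape(-1, VOCAB),
                                   seq[:, 1:].reshape(-1))
loss.backward()
```

Because generation happens in this single token space, the same decoding loop can emit text and nonverbal codes in whatever order the model predicts them, which is what enables unified understanding and generation.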

@article{kim2025_2506.00958,
  title={Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning Nonverbal Cues from Video-Grounded Dialogues},
  author={Youngmin Kim and Jiwan Chung and Jisoo Kim and Sunghyun Lee and Sangkyu Lee and Junhyeok Kim and Cheoljong Yang and Youngjae Yu},
  journal={arXiv preprint arXiv:2506.00958},
  year={2025}
}