
Respond Beyond Language: A Benchmark for Video Generation in Response to Realistic User Intents

Main: 4 pages · Bibliography: 2 pages · Appendix: 7 pages · 4 figures · 9 tables
Abstract

Querying generative AI models, e.g., large language models (LLMs), has become a prevalent method for information acquisition. However, existing query-answer datasets focus primarily on textual responses, making it difficult to address complex user queries that require visual demonstrations or explanations for better understanding. To bridge this gap, we construct RealVideoQuest, a benchmark designed to evaluate the ability of text-to-video (T2V) models to answer real-world, visually grounded queries. It identifies 7.5K real user queries with video-response intents from Chatbot-Arena and builds 4.5K high-quality query-video pairs through a multistage video retrieval and refinement process. We further develop a multi-angle evaluation system to assess the quality of generated video answers. Experiments indicate that current T2V models struggle to effectively address real user queries, pointing to key challenges and future research opportunities in multimodal AI.

@article{wang2025_2506.01689,
  title={Respond Beyond Language: A Benchmark for Video Generation in Response to Realistic User Intents},
  author={Shuting Wang and Yunqi Liu and Zixin Yang and Ning Hu and Zhicheng Dou and Chenyan Xiong},
  journal={arXiv preprint arXiv:2506.01689},
  year={2025}
}