
LOVE: Benchmarking and Evaluating Text-to-Video Generation and Video-to-Text Interpretation

Abstract

Recent advancements in large multimodal models (LMMs) have driven substantial progress in both text-to-video (T2V) generation and video-to-text (V2T) interpretation tasks. However, current AI-generated videos (AIGVs) still exhibit limitations in perceptual quality and text-video alignment. A reliable and scalable automatic model for AIGV evaluation is therefore desirable, and building one depends heavily on the scale and quality of human annotations. To this end, we present AIGVE-60K, a comprehensive dataset and benchmark for AI-Generated Video Evaluation, which features (i) comprehensive tasks, encompassing 3,050 prompts across 20 fine-grained task dimensions, (ii) the largest-scale human annotations to date, comprising 120K mean-opinion scores (MOSs) and 60K question-answering (QA) pairs annotated on 58,500 videos generated by 30 T2V models, and (iii) bidirectional benchmarking and evaluation of both T2V generation and V2T interpretation capabilities. Based on AIGVE-60K, we propose LOVE, an LMM-based metric for AIGV Evaluation along multiple dimensions, including perceptual preference, text-video correspondence, and task-specific accuracy, at both the instance level and the model level. Comprehensive experiments demonstrate that LOVE not only achieves state-of-the-art performance on the AIGVE-60K dataset, but also generalizes effectively to a wide range of other AIGV evaluation benchmarks. These findings highlight the significance of the AIGVE-60K dataset. Database and codes are anonymously available at this https URL.
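Evaluation metrics of this kind are typically validated by correlating their predicted scores with the human MOSs, both over individual videos (instance level) and over per-generator averages (model level). The snippet below is a minimal sketch of that procedure under an assumed annotation schema (`video_id`, `mos`, `model` fields and a hypothetical `predictions` dictionary); it is not the released AIGVE-60K or LOVE code, and only the `scipy.stats` correlation functions are real library calls.

```python
# Hypothetical sketch: agreement between a metric's predictions and human MOSs.
# The annotation schema and field names below are assumptions for illustration.
import numpy as np
from scipy.stats import spearmanr, pearsonr

def evaluate_metric(predictions, annotations):
    """Compute instance-level and model-level agreement with human MOSs.

    predictions: dict mapping video_id -> predicted quality score
    annotations: list of dicts with keys "video_id", "mos", "model" (assumed schema)
    """
    ids = [a["video_id"] for a in annotations]
    mos = np.array([a["mos"] for a in annotations])
    pred = np.array([predictions[i] for i in ids])

    # Instance level: rank (SRCC) and linear (PLCC) correlation over all videos.
    srcc, _ = spearmanr(pred, mos)
    plcc, _ = pearsonr(pred, mos)

    # Model level: average scores per T2V generator, then correlate the rankings.
    models = sorted({a["model"] for a in annotations})
    mos_by_model = np.array(
        [mos[[a["model"] == m for a in annotations]].mean() for m in models])
    pred_by_model = np.array(
        [pred[[a["model"] == m for a in annotations]].mean() for m in models])
    model_srcc, _ = spearmanr(pred_by_model, mos_by_model)

    return {"instance_srcc": srcc, "instance_plcc": plcc, "model_srcc": model_srcc}
```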

@article{wang2025_2505.12098,
  title={LOVE: Benchmarking and Evaluating Text-to-Video Generation and Video-to-Text Interpretation},
  author={Jiarui Wang and Huiyu Duan and Ziheng Jia and Yu Zhao and Woo Yi Yang and Zicheng Zhang and Zijian Chen and Juntong Wang and Yuke Xing and Guangtao Zhai and Xiongkuo Min},
  journal={arXiv preprint arXiv:2505.12098},
  year={2025}
}