Any2Caption: Interpreting Any Condition to Caption for Controllable Video Generation

31 March 2025
Shengqiong Wu
Weicai Ye
Jiahao Wang
Quande Liu
Xintao Wang
Pengfei Wan
Di Zhang
Kun Gai
Shuicheng Yan
Hao Fei
Tat-Seng Chua
arXiv (abs) · PDF · HTML
Abstract

To address the bottleneck of accurate user intent interpretation in the current video generation community, we present Any2Caption, a novel framework for controllable video generation under any condition. The key idea is to decouple the interpretation of the various conditions from the video synthesis step. By leveraging modern multimodal large language models (MLLMs), Any2Caption interprets diverse inputs (text, images, videos, and specialized cues such as regions, motion, and camera poses) into dense, structured captions that give backbone video generators better guidance. We also introduce Any2CapIns, a large-scale dataset with 337K instances and 407K conditions for any-condition-to-caption instruction tuning. Comprehensive evaluations demonstrate that our system significantly improves controllability and video quality across various aspects of existing video generation models. Project Page: this https URL
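
The decoupling described above can be pictured as a two-stage pipeline: an MLLM first condenses whatever conditions the user supplies into one structured caption, and an unchanged caption-conditioned generator then synthesizes the video. The following is a minimal Python sketch of that flow under stated assumptions; every name here (Conditions, CaptionerMLLM, any2caption, generate_video) is an illustrative placeholder, not the paper's actual interface.

from dataclasses import dataclass, field
from typing import Any, List, Optional

@dataclass
class Conditions:
    # Heterogeneous user inputs; any subset may be provided (hypothetical schema).
    text: Optional[str] = None
    images: List[Any] = field(default_factory=list)
    videos: List[Any] = field(default_factory=list)
    regions: List[Any] = field(default_factory=list)   # e.g. bounding boxes
    motion: Optional[Any] = None                       # e.g. trajectories
    camera_poses: Optional[Any] = None

class CaptionerMLLM:
    # Stage 1: an MLLM that interprets arbitrary conditions into one
    # dense, structured caption (the any-condition-to-caption step).
    def interpret(self, cond: Conditions) -> str:
        # A real model would encode each modality and decode a structured
        # caption; this stub only assembles a placeholder string.
        parts = []
        if cond.text:
            parts.append("scene: " + cond.text)
        if cond.regions:
            parts.append("layout: respect the given regions")
        if cond.motion is not None:
            parts.append("motion: follow the given trajectory")
        if cond.camera_poses is not None:
            parts.append("camera: follow the given pose sequence")
        return " | ".join(parts)

def any2caption(cond: Conditions, generate_video) -> Any:
    # Stage 2 is any off-the-shelf caption-conditioned generator, passed
    # in as a callable; it never sees the raw conditions, which is what
    # decouples interpretation from synthesis.
    caption = CaptionerMLLM().interpret(cond)  # decoupled interpretation
    return generate_video(caption)             # unchanged synthesis step

# Example: a dummy generator that just echoes the caption it receives.
print(any2caption(Conditions(text="a red car drifting at dusk",
                             motion="spiral"), lambda cap: cap))

The design point this sketch illustrates is that only the captioner needs to understand new condition types; the backbone generator can be swapped freely as long as it accepts a text caption.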

@article{wu2025_2503.24379,
  title={Any2Caption: Interpreting Any Condition to Caption for Controllable Video Generation},
  author={Shengqiong Wu and Weicai Ye and Jiahao Wang and Quande Liu and Xintao Wang and Pengfei Wan and Di Zhang and Kun Gai and Shuicheng Yan and Hao Fei and Tat-Seng Chua},
  journal={arXiv preprint arXiv:2503.24379},
  year={2025}
}