Caption Anything in Video: Fine-grained Object-centric Captioning via Spatiotemporal Multimodal Prompting

7 April 2025
Yunlong Tang, Jing Bi, Chao Huang, Susan Liang, Daiki Shimada, Hang Hua, Yunzhong Xiao, Yizhi Song, Pinxin Liu, Mingqian Feng, Junjia Guo, Zhuo Liu, Luchuan Song, Ali Vosoughi, Jinxi He, Liu He, Zeliang Zhang, Jiebo Luo, Chenliang Xu
Abstract

We present CAT-V (Caption AnyThing in Video), a training-free framework for fine-grained object-centric video captioning that enables detailed descriptions of user-selected objects through time. CAT-V integrates three key components: a Segmenter based on SAMURAI for precise object segmentation across frames, a Temporal Analyzer powered by TRACE-Uni for accurate event boundary detection and temporal analysis, and a Captioner using InternVL-2.5 for generating detailed object-centric descriptions. Through spatiotemporal visual prompts and chain-of-thought reasoning, our framework generates detailed, temporally aware descriptions of objects' attributes, actions, statuses, interactions, and environmental contexts without requiring additional training data. CAT-V supports flexible user interaction through various visual prompts (points, bounding boxes, and irregular regions) and maintains temporal sensitivity by tracking object states and interactions across different time segments. Our approach addresses the limitations of existing video captioning methods, which either produce overly abstract descriptions or lack object-level precision, enabling fine-grained, object-specific descriptions while maintaining temporal coherence and spatial accuracy. The GitHub repository for this project is available at this https URL.
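
As a reading aid, here is a minimal Python sketch of the three-stage pipeline the abstract describes (segmentation, temporal analysis, captioning). It is not the project's actual code: samurai_track, trace_uni_boundaries, internvl_caption, and every other name below are hypothetical stand-ins for the real SAMURAI, TRACE-Uni, and InternVL-2.5 components, whose true interfaces are in the linked repository.

from dataclasses import dataclass
from typing import List, Tuple, Union

# The abstract lists three visual prompt types: points, bounding boxes,
# and irregular regions (approximated here as polygons).
Point = Tuple[int, int]
Box = Tuple[int, int, int, int]  # (x1, y1, x2, y2)
Polygon = List[Point]
VisualPrompt = Union[Point, Box, Polygon]

@dataclass
class EventSegment:
    start_frame: int
    end_frame: int
    caption: str  # object-centric description for this time segment

# --- Stand-in stubs for the three components named in the abstract. ---

def samurai_track(frames, prompt, prompt_frame):
    """Stub for the SAMURAI-based Segmenter: one object mask per frame."""
    return [None] * len(frames)  # dummy masks

def trace_uni_boundaries(frames, masks):
    """Stub for the TRACE-Uni Temporal Analyzer: event boundary detection."""
    return [(0, len(frames))]  # single segment spanning the whole clip

def internvl_caption(frames, masks, instruction):
    """Stub for the InternVL-2.5 Captioner."""
    return "placeholder caption"

def caption_object_in_video(frames, prompt: VisualPrompt,
                            prompt_frame: int = 0) -> List[EventSegment]:
    """Training-free pipeline: segment -> detect boundaries -> caption."""
    # 1) Segmenter: propagate the user's visual prompt into per-frame masks.
    masks = samurai_track(frames, prompt, prompt_frame)

    # 2) Temporal Analyzer: split the video at detected event boundaries so
    #    the output stays temporally sensitive.
    boundaries = trace_uni_boundaries(frames, masks)

    # 3) Captioner: describe the masked object per segment, prompting for
    #    chain-of-thought reasoning over the dimensions the abstract names
    #    (attributes, actions, statuses, interactions, environment).
    segments = []
    for start, end in boundaries:
        text = internvl_caption(
            frames[start:end],
            masks[start:end],
            instruction="Reason step by step about the highlighted object's "
                        "attributes, actions, status, interactions, and "
                        "environment, then produce a caption.",
        )
        segments.append(EventSegment(start, end, text))
    return segments

# Example: caption an object selected with a bounding box on frame 0.
if __name__ == "__main__":
    dummy_frames = [object()] * 30  # stand-in for decoded video frames
    for seg in caption_object_in_video(dummy_frames, prompt=(10, 10, 80, 120)):
        print(seg.start_frame, seg.end_frame, seg.caption)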

BibTeX
@article{tang2025_2504.05541,
  title={Caption Anything in Video: Fine-grained Object-centric Captioning via Spatiotemporal Multimodal Prompting},
  author={Yunlong Tang and Jing Bi and Chao Huang and Susan Liang and Daiki Shimada and Hang Hua and Yunzhong Xiao and Yizhi Song and Pinxin Liu and Mingqian Feng and Junjia Guo and Zhuo Liu and Luchuan Song and Ali Vosoughi and Jinxi He and Liu He and Zeliang Zhang and Jiebo Luo and Chenliang Xu},
  journal={arXiv preprint arXiv:2504.05541},
  year={2025}
}