ResearchTrend.AI


Describe Anything: Detailed Localized Image and Video Captioning

22 April 2025
Long Lian
Yin Cui
Yunhao Ge
Sifei Liu
Hanzi Mao
Boyi Li
Marco Pavone
Xuan Li
Trevor Darrell
Adam Yala
Huayu Chen
    MLLM
    3DV
    VLM
Abstract

Generating detailed and accurate descriptions for specific regions in images and videos remains a fundamental challenge for vision-language models. We introduce the Describe Anything Model (DAM), designed for detailed localized captioning (DLC). DAM preserves both local details and global context through two key innovations: a focal prompt, which ensures high-resolution encoding of targeted regions, and a localized vision backbone, which integrates precise localization with its broader context. To tackle the scarcity of high-quality DLC data, we propose a semi-supervised learning (SSL)-based data pipeline (DLC-SDP). DLC-SDP starts with existing segmentation datasets and expands to unlabeled web images using SSL. We introduce DLC-Bench, a benchmark designed to evaluate DLC without relying on reference captions. DAM sets a new state of the art on 7 benchmarks spanning keyword-level, phrase-level, and detailed multi-sentence localized image and video captioning.
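The focal prompt described above can be illustrated with a minimal sketch: alongside the full image, the model receives a high-resolution crop of the target region expanded with surrounding context. The expansion factor, clamping behavior, and the `focal_crop` helper below are illustrative assumptions, not the paper's actual parameters or implementation.

```python
def focal_crop(img_w, img_h, box, context=3.0):
    """Expand a region box (x0, y0, x1, y1) by a `context` factor
    around its center, clamped to the image bounds.

    This sketches the intuition behind a focal prompt: encode the
    targeted region at high resolution while keeping some of its
    surroundings for context.
    """
    x0, y0, x1, y1 = box
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    half_w = (x1 - x0) * context / 2
    half_h = (y1 - y0) * context / 2
    return (
        max(0, int(cx - half_w)),
        max(0, int(cy - half_h)),
        min(img_w, int(cx + half_w)),
        min(img_h, int(cy + half_h)),
    )

# A DLC-style input would then pair the full image with this focal
# crop (and the region mask restricted to the crop).
crop = focal_crop(1024, 768, (400, 300, 500, 400))
```

A 100x100 region thus yields a 300x300 context window centered on it, shrunk where it would spill past the image border.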

View on arXiv
@article{lian2025_2504.16072,
  title={Describe Anything: Detailed Localized Image and Video Captioning},
  author={Long Lian and Yifan Ding and Yunhao Ge and Sifei Liu and Hanzi Mao and Boyi Li and Marco Pavone and Ming-Yu Liu and Trevor Darrell and Adam Yala and Yin Cui},
  journal={arXiv preprint arXiv:2504.16072},
  year={2025}
}