
Roboflow100-VL: A Multi-Domain Object Detection Benchmark for Vision-Language Models

27 May 2025
Peter Robicheaux
Matvei Popov
Anish Madan
Isaac Robinson
Joseph Nelson
Deva Ramanan
Neehar Peri
Main: 9 pages, 9 figures, 9 tables; Appendix: 13 pages; Bibliography: 4 pages
Abstract

Vision-language models (VLMs) trained on internet-scale data achieve remarkable zero-shot detection performance on common objects like car, truck, and pedestrian. However, state-of-the-art models still struggle to generalize to out-of-distribution classes, tasks, and imaging modalities not typically found in their pre-training. Rather than simply re-training VLMs on more visual data, we argue that one should align VLMs to new concepts with annotation instructions containing a few visual examples and rich textual descriptions. To this end, we introduce Roboflow100-VL, a large-scale collection of 100 multi-modal object detection datasets with diverse concepts not commonly found in VLM pre-training. We evaluate state-of-the-art models on our benchmark in zero-shot, few-shot, semi-supervised, and fully-supervised settings, allowing for comparison across data regimes. Notably, we find that VLMs like GroundingDINO and Qwen2.5-VL achieve less than 2% zero-shot accuracy on challenging medical imaging datasets within Roboflow100-VL, demonstrating the need for few-shot concept alignment. Lastly, we discuss our recent CVPR 2025 Foundational FSOD competition and share insights from the community. Notably, the winning team significantly outperforms our baseline by 16.8 mAP! Our code and dataset are available at this https URL and this https URL
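The abstract compares detectors by mAP across data regimes. As background for how such numbers are typically produced, here is a minimal sketch of per-class average precision at an IoU threshold of 0.5; the function names and box format are illustrative assumptions, not the benchmark's actual evaluation code (the paper's repository defines the official protocol).

```python
def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def average_precision(preds, gts, iou_thresh=0.5):
    """Simplified single-class AP.

    preds: list of (confidence, box) tuples; gts: list of ground-truth boxes.
    Greedily matches each prediction (highest confidence first) to an unmatched
    ground-truth box, then accumulates precision at each true positive.
    """
    preds = sorted(preds, key=lambda p: -p[0])
    matched = set()
    ap, true_pos = 0.0, 0
    for rank, (score, box) in enumerate(preds, start=1):
        best, best_j = 0.0, -1
        for j, gt in enumerate(gts):
            if j in matched:
                continue
            overlap = iou(box, gt)
            if overlap > best:
                best, best_j = overlap, j
        if best >= iou_thresh:
            matched.add(best_j)
            true_pos += 1
            ap += true_pos / rank  # precision at this recall point
    return ap / max(len(gts), 1)
```

Mean AP (mAP) then averages this quantity over classes (and, in COCO-style evaluation, over several IoU thresholds), which is what makes a 16.8 mAP gap between the winning entry and the baseline a substantial margin.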

@article{robicheaux2025_2505.20612,
  title={Roboflow100-VL: A Multi-Domain Object Detection Benchmark for Vision-Language Models},
  author={Peter Robicheaux and Matvei Popov and Anish Madan and Isaac Robinson and Joseph Nelson and Deva Ramanan and Neehar Peri},
  journal={arXiv preprint arXiv:2505.20612},
  year={2025}
}