
Roboflow100-VL: A Multi-Domain Object Detection Benchmark for Vision-Language Models

Main: 9 pages · Appendix: 13 pages · Bibliography: 4 pages · 9 figures · 9 tables
Abstract

Vision-language models (VLMs) trained on internet-scale data achieve remarkable zero-shot detection performance on common objects like car, truck, and pedestrian. However, state-of-the-art models still struggle to generalize to out-of-distribution classes, tasks, and imaging modalities not typically found in their pre-training. Rather than simply re-training VLMs on more visual data, we argue that one should align VLMs to new concepts with annotation instructions containing a few visual examples and rich textual descriptions. To this end, we introduce Roboflow100-VL, a large-scale collection of 100 multi-modal object detection datasets with diverse concepts not commonly found in VLM pre-training. We evaluate state-of-the-art models on our benchmark in zero-shot, few-shot, semi-supervised, and fully-supervised settings, allowing for comparison across data regimes. Notably, we find that VLMs like GroundingDINO and Qwen2.5-VL achieve less than 2% zero-shot accuracy on challenging medical imaging datasets within Roboflow100-VL, demonstrating the need for few-shot concept alignment. Our code and dataset are available at this https URL and this https URL
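For illustration, the following is a minimal sketch of the kind of zero-shot detection evaluation described above, run on a single dataset with COCO-format annotations. It is not the paper's exact protocol: the model checkpoint (grounding-dino-tiny), file paths, thresholds, and the exact-phrase label matching are assumptions, and the post-processing argument names can differ across transformers versions.

# Hedged sketch: zero-shot Grounding DINO evaluation on one COCO-format dataset.
# Paths, thresholds, and the phrase-to-category matching are illustrative only.
import torch
from PIL import Image
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval
from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection

ANN_FILE = "dataset/valid/_annotations.coco.json"  # hypothetical path
IMG_DIR = "dataset/valid"                           # hypothetical path

device = "cuda" if torch.cuda.is_available() else "cpu"
processor = AutoProcessor.from_pretrained("IDEA-Research/grounding-dino-tiny")
model = AutoModelForZeroShotObjectDetection.from_pretrained(
    "IDEA-Research/grounding-dino-tiny").to(device)

coco = COCO(ANN_FILE)
cat_ids = sorted(coco.getCatIds())
names = [coco.loadCats([c])[0]["name"] for c in cat_ids]
# Grounding DINO takes a period-separated text prompt of class names.
prompt = ". ".join(names) + "."

results = []
for img_id in coco.getImgIds():
    info = coco.loadImgs([img_id])[0]
    image = Image.open(f"{IMG_DIR}/{info['file_name']}").convert("RGB")
    inputs = processor(images=image, text=prompt, return_tensors="pt").to(device)
    with torch.no_grad():
        outputs = model(**inputs)
    dets = processor.post_process_grounded_object_detection(
        outputs, inputs.input_ids,
        box_threshold=0.25, text_threshold=0.25,   # names vary by version
        target_sizes=[(info["height"], info["width"])])[0]
    for box, score, phrase in zip(dets["boxes"], dets["scores"], dets["labels"]):
        x0, y0, x1, y1 = box.tolist()
        # Simplification: keep a detection only if the predicted phrase
        # exactly matches a dataset class name, then map it to its id.
        if phrase in names:
            results.append({"image_id": img_id,
                            "category_id": cat_ids[names.index(phrase)],
                            "bbox": [x0, y0, x1 - x0, y1 - y0],
                            "score": float(score)})

# Standard COCO-style mAP over the collected detections.
coco_eval = COCOeval(coco, coco.loadRes(results), iouType="bbox")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()

The few-shot and fully-supervised settings in the benchmark would instead fine-tune the detector on the provided annotations before running the same evaluation loop.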

@article{robicheaux2025_2505.20612,
  title={Roboflow100-VL: A Multi-Domain Object Detection Benchmark for Vision-Language Models},
  author={Peter Robicheaux and Matvei Popov and Anish Madan and Isaac Robinson and Joseph Nelson and Deva Ramanan and Neehar Peri},
  journal={arXiv preprint arXiv:2505.20612},
  year={2025}
}