A Good CREPE needs more than just Sugar: Investigating Biases in Compositional Vision-Language Benchmarks

Abstract

We investigate 17 benchmarks (e.g., SugarCREPE, VALSE) commonly used for measuring the compositional understanding capabilities of vision-language models (VLMs). We scrutinize design choices in their construction, including data sources (e.g., MS-COCO) and curation procedures (e.g., constructing negative images/captions), uncovering several inherent biases across most benchmarks. We find that blind heuristics (e.g., token length, log-likelihood under a language model) perform on par with CLIP models, indicating that these benchmarks do not effectively measure compositional understanding. We demonstrate that the underlying factor is a distribution asymmetry between positive and negative images/captions, induced by the benchmark construction procedures. To mitigate these issues, we provide a few key recommendations for constructing more robust vision-language compositional understanding benchmarks that are less prone to such simple attacks.

@article{udandarao2025_2506.08227,
  title={A Good CREPE needs more than just Sugar: Investigating Biases in Compositional Vision-Language Benchmarks},
  author={Vishaal Udandarao and Mehdi Cherti and Shyamgopal Karthik and Jenia Jitsev and Samuel Albanie and Matthias Bethge},
  journal={arXiv preprint arXiv:2506.08227},
  year={2025}
}