Advancing Compositional Awareness in CLIP with Efficient Fine-Tuning

30 May 2025
Amit Peleg, Naman D. Singh, Matthias Hein
Community: CoGeVLM
Main: 9 pages · 5 figures · Bibliography: 3 pages · 20 tables · Appendix: 14 pages
Abstract

Vision-language models like CLIP have demonstrated remarkable zero-shot capabilities in classification and retrieval. However, these models often struggle with compositional reasoning: the ability to understand the relationships between concepts. A recent benchmark, SugarCrepe++, reveals that previous works on improving compositionality have mainly improved lexical sensitivity but neglected semantic understanding. In addition, downstream retrieval performance often deteriorates, although one would expect that improving compositionality should enhance retrieval. In this work, we introduce CLIC (Compositionally-aware Learning in CLIP), a fine-tuning method based on a novel training technique combining multiple images and their associated captions. CLIC improves compositionality across architectures as well as differently pre-trained CLIP models, both in terms of lexical and semantic understanding, and achieves consistent gains in retrieval performance. This even applies to the recent CLIPS, which already achieves SOTA retrieval performance; for this model, too, short fine-tuning with CLIC leads to an improvement in retrieval and yields the best compositional CLIP model on SugarCrepe++. All our models and code are available at this https URL.
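The abstract only gestures at how CLIC's training technique works, so the following is a minimal, self-contained PyTorch sketch of one way a contrastive fine-tuning step could combine multiple images and their associated captions into single training examples that also serve as hard in-batch negatives. The ToyEncoder and contrastive_step names, the feature-concatenation scheme, and the symmetric InfoNCE loss are all illustrative assumptions, not the CLIC objective from the paper.

# Hypothetical sketch, loosely inspired by the abstract's description of CLIC:
# a contrastive fine-tuning step on "combined" image-caption examples. The toy
# encoders, the concatenation scheme, and the symmetric InfoNCE loss are
# illustrative assumptions, not the method from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyEncoder(nn.Module):
    """Stand-in for a CLIP image or text tower: maps features to a joint space."""
    def __init__(self, in_dim: int, embed_dim: int = 64):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                  nn.Linear(128, embed_dim))

    def forward(self, x):
        return F.normalize(self.proj(x), dim=-1)  # unit-norm embeddings

def contrastive_step(img_enc, txt_enc, images, captions, tau=0.07):
    """One symmetric InfoNCE step on a batch of (image, caption) pairs.
    Here images[i] is a feature vector built by combining two source images,
    and captions[i] the corresponding combined caption features (an assumption
    about how 'multiple images and their associated captions' might enter a
    single training example)."""
    zi = img_enc(images)        # (B, D) image embeddings
    zt = txt_enc(captions)      # (B, D) text embeddings
    logits = zi @ zt.t() / tau  # (B, B) cosine-similarity matrix
    labels = torch.arange(len(images))
    # The matching caption is the positive; all other batch items are
    # negatives, including the combined examples acting as hard negatives.
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

if __name__ == "__main__":
    torch.manual_seed(0)
    img_enc, txt_enc = ToyEncoder(512), ToyEncoder(512)
    opt = torch.optim.AdamW(list(img_enc.parameters()) +
                            list(txt_enc.parameters()), lr=1e-4)
    # Combine pairs of raw examples into single training items (illustrative):
    raw_imgs, raw_txts = torch.randn(16, 256), torch.randn(16, 256)
    images = torch.cat([raw_imgs[0::2], raw_imgs[1::2]], dim=-1)    # (8, 512)
    captions = torch.cat([raw_txts[0::2], raw_txts[1::2]], dim=-1)  # (8, 512)
    loss = contrastive_step(img_enc, txt_enc, images, captions)
    loss.backward()
    opt.step()
    print(f"toy fine-tuning loss: {loss.item():.4f}")

The concatenation of two raw examples into one training item is only a stand-in for "combining multiple images and their associated captions"; the paper's actual construction of positives and hard negatives should be taken from its method section.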

@article{peleg2025_2505.24424,
  title={Advancing Compositional Awareness in CLIP with Efficient Fine-Tuning},
  author={Amit Peleg and Naman Deep Singh and Matthias Hein},
  journal={arXiv preprint arXiv:2505.24424},
  year={2025}
}