TNG-CLIP: Training-Time Negation Data Generation for Negation Awareness of CLIP

Main: 7 pages
Figures: 3
Bibliography: 3 pages
Tables: 12
Appendix: 5 pages
Abstract

Vision-language models (VLMs), such as CLIP, have demonstrated strong performance across a range of downstream tasks. However, CLIP is still limited in negation understanding: the ability to recognize the absence or exclusion of a concept. Existing methods address the problem by using a large language model (LLM) to generate large-scale datasets of image captions containing negation and then fine-tuning CLIP on them. However, these methods are both time- and compute-intensive, and their evaluations are typically restricted to image-text matching tasks. To broaden this scope, we (1) introduce a training-time negation data generation pipeline in which negation captions are generated during the training stage, adding only 2.5% extra training time, and (2) propose Neg-TtoI, the first benchmark for evaluating text-to-image generation models on prompts containing negation, assessing a model's ability to produce semantically accurate images. We show that our proposed method, TNG-CLIP, achieves state-of-the-art performance on diverse negation benchmarks covering image-to-text matching, text-to-image retrieval, and image generation.
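The abstract describes generating negation captions on the fly during training instead of pre-building an LLM-generated dataset. As a rough illustration only, the sketch below shows one way such captions could be synthesized inside a data loader; the template strings, the function make_negation_captions, and the concept pool are assumptions for illustration, not the authors' actual pipeline.

import random

# Minimal sketch of training-time negation caption generation, assuming a
# simple template-based rewrite. The paper's actual templates, concept
# selection, and loss formulation are not specified in the abstract, so the
# names below (make_negation_captions, NEG_TEMPLATES) are hypothetical.

NEG_TEMPLATES = [
    "{caption}, with no {concept}",
    "{caption}, without any {concept}",
]

def make_negation_captions(captions, concept_pool, rng=random):
    """Pair each caption with a negated concept, producing negation
    captions on the fly during training."""
    negated = []
    for cap in captions:
        # Prefer concepts that do not already appear in the caption, so the
        # negation remains (heuristically) consistent with the image.
        candidates = [c for c in concept_pool if c.lower() not in cap.lower()]
        concept = rng.choice(candidates or concept_pool)
        template = rng.choice(NEG_TEMPLATES)
        negated.append(template.format(caption=cap.rstrip("."), concept=concept))
    return negated

# Example: captions from one training batch and a small concept pool.
batch_captions = ["a dog running on the beach", "a bowl of apples on a table"]
concepts = ["dog", "apple", "car", "person"]
print(make_negation_captions(batch_captions, concepts))

In an actual training loop, negation captions produced this way would be encoded alongside the original captions so the contrastive objective can learn to distinguish present from negated concepts.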

@article{cai2025_2505.18434,
  title={TNG-CLIP: Training-Time Negation Data Generation for Negation Awareness of CLIP},
  author={Yuliang Cai and Jesse Thomason and Mohammad Rostami},
  journal={arXiv preprint arXiv:2505.18434},
  year={2025}
}