CLAP: Isolating Content from Style through Contrastive Learning with Augmented Prompts

28 November 2023
Yichao Cai, Yuhang Liu, Zhen Zhang, Javen Qinfeng Shi
Tags: CLIP, VLM
Abstract

Contrastive vision-language models, such as CLIP, have garnered considerable attention for various downstream tasks, mainly due to the remarkable generalization ability of the learned features. However, the features they learn often blend content and style information, which somewhat limits their generalization capabilities under distribution shifts. To address this limitation, we adopt a causal generative perspective for multimodal data and propose contrastive learning with data augmentation to disentangle content features from the original representations. To achieve this, we begin by exploring image augmentation techniques and develop a method to seamlessly integrate them into pre-trained CLIP-like models to extract pure content features. Taking a step further, and recognizing the inherent semantic richness and logical structure of text data, we explore the use of text augmentation to isolate latent content from style features. This enables CLIP-like models' encoders to concentrate on latent content information, refining the representations learned by pre-trained CLIP-like models. Our extensive experiments across diverse datasets demonstrate significant improvements in zero-shot and few-shot classification tasks, alongside enhanced robustness to various perturbations. These results underscore the effectiveness of our proposed methods in refining vision-language representations and advancing the state of the art in multimodal learning.

@article{cai2025_2311.16445,
  title={CLAP: Isolating Content from Style through Contrastive Learning with Augmented Prompts},
  author={Yichao Cai and Yuhang Liu and Zhen Zhang and Javen Qinfeng Shi},
  journal={arXiv preprint arXiv:2311.16445},
  year={2025}
}