CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning

Multimodal contrastive pretraining has been used to train multimodal representation models, such as CLIP, on vast amounts of paired image-text data. However, previous studies have highlighted the susceptibility of such models to backdoor attacks. Specifically, when trained on backdoored examples, CLIP learns spurious correlations between the embedded backdoor trigger and the target label, aligning their representations in the joint embedding space. By injecting only a few poisoned examples (e.g., 75 examples in 3 million pretraining data points), an attacker can significantly manipulate the model's behavior, and such correlations are hard to detect or unlearn. To address this issue, we propose CleanCLIP, a finetuning framework that weakens the spurious associations introduced by backdoor attacks by re-aligning the representations of each modality independently. CleanCLIP can be employed both for unsupervised finetuning on paired image-text data and for supervised finetuning on labeled image data. We demonstrate that unsupervised finetuning with a combination of multimodal contrastive and unimodal self-supervised objectives for the individual modalities can significantly reduce the impact of the backdoor attack. Additionally, supervised finetuning on task-specific labeled data of a single modality, such as image data, removes the backdoor trigger from the CLIP vision encoder. Empirically, we show that CleanCLIP maintains model performance on benign examples while mitigating the impact of a range of backdoor attacks on multimodal contrastive learning.
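To make the combined objective concrete, below is a minimal PyTorch-style sketch of finetuning with a multimodal contrastive term plus unimodal self-supervised terms for each modality, as described in the abstract. The function and parameter names (`encode_image`, `encode_text`, `aug_images`, `aug_texts`, the weight `lam`) are illustrative assumptions, not the authors' released code.

```python
# Hedged sketch of a CleanCLIP-style finetuning loss, assuming a CLIP-like
# model with `encode_image`/`encode_text` and caller-provided augmented views.
import torch
import torch.nn.functional as F


def info_nce(a: torch.Tensor, b: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss between two batches of embeddings."""
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature                      # pairwise similarities
    targets = torch.arange(a.size(0), device=a.device)    # matching pairs on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


def cleanclip_loss(model, images, texts, aug_images, aug_texts, lam: float = 1.0):
    # Multimodal contrastive term: align paired image and text embeddings (as in CLIP).
    img_emb = model.encode_image(images)
    txt_emb = model.encode_text(texts)
    l_multimodal = info_nce(img_emb, txt_emb)

    # Unimodal self-supervised terms: align each example with its own augmented
    # view, independently per modality, to weaken spurious cross-modal
    # associations introduced by poisoned pairs.
    l_image = info_nce(img_emb, model.encode_image(aug_images))
    l_text = info_nce(txt_emb, model.encode_text(aug_texts))

    return l_multimodal + lam * (l_image + l_text)
```

A usage loop would compute this loss on clean paired data during finetuning and backpropagate as usual; the relative weight `lam` of the unimodal terms is a hyperparameter assumed here for illustration.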