ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Poisoning and Backdooring Contrastive Learning
Nicholas Carlini, Andreas Terzis
17 June 2021 · arXiv:2106.09667

Papers citing "Poisoning and Backdooring Contrastive Learning"

Showing 50 of 118 citing papers.
• X-Transfer Attacks: Towards Super Transferable Adversarial Attacks on CLIP
  Hanxun Huang, Sarah Monazam Erfani, Yige Li, Xingjun Ma, James Bailey
  AAML · 08 May 2025

• Protocol-agnostic and Data-free Backdoor Attacks on Pre-trained Models in RF Fingerprinting
  Tianya Zhao, Ningning Wang, Junqing Zhang, Xuyu Wang
  AAML · 01 May 2025

• Data Poisoning in Deep Learning: A Survey
  Pinlong Zhao, Weiyao Zhu, Pengfei Jiao, Di Gao, Ou Wu
  AAML · 27 Mar 2025

• C^2 ATTACK: Towards Representation Backdoor on CLIP via Concept Confusion
  Lijie Hu, Junchi Liao, Weimin Lyu, Shaopeng Fu, Tianhao Huang, Shu Yang, Guimin Hu, Di Wang
  AAML · 12 Mar 2025

• Stealthy Backdoor Attack in Self-Supervised Learning Vision Encoders for Large Vision Language Models
  Zhaoyi Liu, Huan Zhang
  AAML · 25 Feb 2025

• PersGuard: Preventing Malicious Personalization via Backdoor Attacks on Pre-trained Text-to-Image Diffusion Models
  Xinwei Liu, X. Jia, Yuan Xun, Hua Zhang, Xiaochun Cao
  DiffM, AAML · 22 Feb 2025

• UNIDOOR: A Universal Framework for Action-Level Backdoor Attacks in Deep Reinforcement Learning
  Oubo Ma, L. Du, Yang Dai, Chunyi Zhou, Qingming Li, Yuwen Pu, Shouling Ji
  28 Jan 2025
• Data Free Backdoor Attacks
  Bochuan Cao, Jinyuan Jia, Chuxuan Hu, Wenbo Guo, Zhen Xiang, Jinghui Chen, Bo-wen Li, Dawn Song
  AAML · 09 Dec 2024

• DeDe: Detecting Backdoor Samples for SSL Encoders via Decoders
  Sizai Hou, Songze Li, Duanyi Yao
  AAML · 25 Nov 2024

• Uncovering, Explaining, and Mitigating the Superficial Safety of Backdoor Defense
  Rui Min, Zeyu Qin, Nevin L. Zhang, Li Shen, Minhao Cheng
  AAML · 13 Oct 2024

• Invisibility Cloak: Disappearance under Human Pose Estimation via Backdoor Attacks
  Minxing Zhang, Michael Backes, Xiao Zhang
  AAML · 10 Oct 2024

• Backdooring Vision-Language Models with Out-Of-Distribution Data
  Weimin Lyu, Jiachen Yao, Saumya Gupta, Lu Pang, Tao Sun, Lingjie Yi, Lijie Hu, Haibin Ling, Chao Chen
  VLM, AAML · 02 Oct 2024

• Contrastive Abstraction for Reinforcement Learning
  Vihang Patil, M. Hofmarcher, Elisabeth Rumetshofer, Sepp Hochreiter
  OffRL, SSL · 01 Oct 2024
• Efficient Backdoor Defense in Multimodal Contrastive Learning: A Token-Level Unlearning Method for Mitigating Threats
  Kuanrong Liu, Siyuan Liang, Jiawei Liang, Pengwen Dai, Xiaochun Cao
  MU, AAML · 29 Sep 2024

• TrojVLM: Backdoor Attack Against Vision Language Models
  Weimin Lyu, Lu Pang, Tengfei Ma, Haibin Ling, Chao Chen
  MLLM · 28 Sep 2024

• Adversarial Backdoor Defense in CLIP
  Junhao Kuang, Siyuan Liang, Jiawei Liang, Kuanrong Liu, Xiaochun Cao
  AAML · 24 Sep 2024

• Exploiting Supervised Poison Vulnerability to Strengthen Self-Supervised Defense
  Jeremy A. Styborski, Mingzhi Lyu, Y. Huang, Adams Kong
  13 Sep 2024

• Backdoor Defense through Self-Supervised and Generative Learning
  Ivan Sabolić, Ivan Grubišić, Siniša Šegvić
  AAML · 02 Sep 2024

• BAPLe: Backdoor Attacks on Medical Foundational Models using Prompt Learning
  Asif Hanif, Fahad Shamshad, Muhammad Awais, Muzammal Naseer, F. Khan, Karthik Nandakumar, Salman Khan, Rao Muhammad Anwer
  MedIm, AAML · 14 Aug 2024
• Pre-trained Encoder Inference: Revealing Upstream Encoders In Downstream Machine Learning Services
  Shaopeng Fu, Xuexue Sun, Ke Qing, Tianhang Zheng, Di Wang
  AAML, MIACV, SILM · 05 Aug 2024

• Downstream Transfer Attack: Adversarial Attacks on Downstream Models with Pre-trained Vision Transformers
  Weijie Zheng, Xingjun Ma, Hanxun Huang, Zuxuan Wu, Yu-Gang Jiang
  AAML · 03 Aug 2024

• Vera Verto: Multimodal Hijacking Attack
  Minxing Zhang, Wenhao Yang, H. Bidkhori, Yang Zhang
  AAML · 31 Jul 2024

• Mutual Information Guided Backdoor Mitigation for Pre-trained Encoders
  Tingxu Han, Weisong Sun, Ziqi Ding, Chunrong Fang, Hanwei Qian, Jiaxun Li, Zhenyu Chen, Xiangyu Zhang
  AAML · 05 Jun 2024

• AI Risk Management Should Incorporate Both Safety and Security
  Xiangyu Qi, Yangsibo Huang, Yi Zeng, Edoardo Debenedetti, Jonas Geiping, ..., Chaowei Xiao, Bo-wen Li, Dawn Song, Peter Henderson, Prateek Mittal
  AAML · 29 May 2024

• TrojFM: Resource-efficient Backdoor Attacks against Very Large Foundation Models
  Yuzhou Nie, Yanting Wang, Jinyuan Jia, Michael J. De Lucia, Nathaniel D. Bastian, Wenbo Guo, Dawn Song
  SILM, AAML · 27 May 2024
• Breaking the False Sense of Security in Backdoor Defense through Re-Activation Attack
  Mingli Zhu, Siyuan Liang, Baoyuan Wu
  AAML · 25 May 2024

• BDetCLIP: Multimodal Prompting Contrastive Test-Time Backdoor Detection
  Yuwei Niu, Shuo He, Qi Wei, Feng Liu, Lei Feng
  AAML · 24 May 2024

• Invisible Backdoor Attack against Self-supervised Learning
  Hanrong Zhang, Zhenting Wang, Tingxu Han, Mingyu Jin, Chenlu Zhan, Mengnan Du, Hongwei Wang, Shiqing Ma
  AAML, SSL · 23 May 2024

• SEEP: Training Dynamics Grounds Latent Representation Search for Mitigating Backdoor Poisoning Attacks
  Xuanli He, Qiongkai Xu, Jun Wang, Benjamin I. P. Rubinstein, Trevor Cohn
  AAML · 19 May 2024

• Espresso: Robust Concept Filtering in Text-to-Image Models
  Anudeep Das, Vasisht Duddu, Rui Zhang, Nadarajah Asokan
  EGVM · 30 Apr 2024

• RankCLIP: Ranking-Consistent Language-Image Pretraining
  Yiming Zhang, Zhuokai Zhao, Zhaorun Chen, Zhili Feng, Zenghui Ding, Yining Sun
  SSL, VLM · 15 Apr 2024

• FCert: Certifiably Robust Few-Shot Classification in the Era of Foundation Models
  Yanting Wang, Wei Zou, Jinyuan Jia
  12 Apr 2024
• How to Craft Backdoors with Unlabeled Data Alone?
  Yifei Wang, Wenhan Ma, Stefanie Jegelka, Yisen Wang
  SyDa · 10 Apr 2024

• MedBN: Robust Test-Time Adaptation against Malicious Test Samples
  Hyejin Park, Jeongyeon Hwang, Sunung Mun, Sangdon Park, Jungseul Ok
  AAML, TTA, OOD · 28 Mar 2024

• Unlearning Backdoor Threats: Enhancing Backdoor Defense in Multimodal Contrastive Learning via Local Token Unlearning
  Siyuan Liang, Kuanrong Liu, Jiajun Gong, Jiawei Liang, Yuan Xun, Ee-Chien Chang, Xiaochun Cao
  AAML, MU · 24 Mar 2024

• On the Effectiveness of Distillation in Mitigating Backdoors in Pre-trained Encoder
  Tingxu Han, Shenghan Huang, Ziqi Ding, Weisong Sun, Yebo Feng, ..., Hanwei Qian, Cong Wu, Quanjun Zhang, Yang Liu, Zhenyu Chen
  06 Mar 2024

• Mudjacking: Patching Backdoor Vulnerabilities in Foundation Models
  Hongbin Liu, Michael K. Reiter, Neil Zhenqiang Gong
  AAML · 22 Feb 2024

• VL-Trojan: Multimodal Instruction Backdoor Attacks against Autoregressive Visual Language Models
  Jiawei Liang, Siyuan Liang, Man Luo, Aishan Liu, Dongchen Han, Ee-Chien Chang, Xiaochun Cao
  21 Feb 2024
• Indiscriminate Data Poisoning Attacks on Pre-trained Feature Extractors
  Yiwei Lu, Matthew Y.R. Yang, Gautam Kamath, Yaoliang Yu
  AAML, SILM · 20 Feb 2024

• Instruction Backdoor Attacks Against Customized LLMs
  Rui Zhang, Hongwei Li, Rui Wen, Wenbo Jiang, Yuan Zhang, Michael Backes, Yun Shen, Yang Zhang
  AAML, SILM · 14 Feb 2024

• PoisonedRAG: Knowledge Poisoning Attacks to Retrieval-Augmented Generation of Large Language Models
  Wei Zou, Runpeng Geng, Binghui Wang, Jinyuan Jia
  SILM · 12 Feb 2024

• Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models
  Yuancheng Xu, Jiarui Yao, Manli Shu, Yanchao Sun, Zichu Wu, Ning Yu, Tom Goldstein, Furong Huang
  AAML · 05 Feb 2024

• Preference Poisoning Attacks on Reward Model Learning
  Junlin Wu, Jiong Wang, Chaowei Xiao, Chenguang Wang, Ning Zhang, Yevgeniy Vorobeychik
  AAML · 02 Feb 2024

• On the Difficulty of Defending Contrastive Learning against Backdoor Attacks
  Changjiang Li, Ren Pang, Bochuan Cao, Zhaohan Xi, Jinghui Chen, Shouling Ji, Ting Wang
  AAML · 14 Dec 2023
• Erasing Self-Supervised Learning Backdoor by Cluster Activation Masking
  Shengsheng Qian, Yifei Wang, Dizhan Xue, Shengjie Zhang, Huaiwen Zhang, Changsheng Xu
  AAML · 13 Dec 2023

• Robust Backdoor Detection for Deep Learning via Topological Evolution Dynamics
  Xiaoxing Mo, Yechao Zhang, Leo Yu Zhang, Wei Luo, Nan Sun, Shengshan Hu, Shang Gao, Yang Xiang
  AAML · 05 Dec 2023

• CLAP: Isolating Content from Style through Contrastive Learning with Augmented Prompts
  Yichao Cai, Yuhang Liu, Zhen Zhang, Javen Qinfeng Shi
  CLIP, VLM · 28 Nov 2023

• Effective Backdoor Mitigation in Vision-Language Models Depends on the Pre-training Objective
  Sahil Verma, Gantavya Bhatt, Avi Schwarzschild, Soumye Singhal, Arnav M. Das, Chirag Shah, John P Dickerson, Jeff Bilmes
  AAML · 25 Nov 2023

• BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning
  Siyuan Liang, Mingli Zhu, Aishan Liu, Baoyuan Wu, Xiaochun Cao, Ee-Chien Chang
  20 Nov 2023

• Nepotistically Trained Generative-AI Models Collapse
  Matyáš Boháček, Hany Farid
  20 Nov 2023