ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning (arXiv:2108.00352)
1 August 2021
Jinyuan Jia, Yupei Liu, Neil Zhenqiang Gong
SILM · SSL

Papers citing "BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning"

50 / 106 papers shown
Revisiting Data-Free Knowledge Distillation with Poisoned Teachers
Junyuan Hong, Yi Zeng, Shuyang Yu, Lingjuan Lyu, R. Jia, Jiayu Zhou
AAML · 04 Jun 2023

UMD: Unsupervised Model Detection for X2X Backdoor Attacks
Zhen Xiang, Zidi Xiong, Bo-wen Li
AAML · 29 May 2023

NOTABLE: Transferable Backdoor Attacks Against Prompt-based NLP Models
Kai Mei, Zheng Li, Zhenting Wang, Yang Zhang, Shiqing Ma
AAML · SILM · 28 May 2023

UOR: Universal Backdoor Attacks on Pre-trained Language Models
Wei Du, Peixuan Li, Bo-wen Li, Haodong Zhao, Gongshen Liu
AAML · 16 May 2023

Text-to-Image Diffusion Models can be Easily Backdoored through Multimodal Data Poisoning
Shengfang Zhai, Yinpeng Dong, Qingni Shen, Shih-Chieh Pu, Yuejian Fang, Hang Su
07 May 2023

Defense-Prefix for Preventing Typographic Attacks on CLIP
Hiroki Azuma, Yusuke Matsui
VLM · AAML · 10 Apr 2023

UNICORN: A Unified Backdoor Trigger Inversion Framework
Zhenting Wang, Kai Mei, Juan Zhai, Shiqing Ma
LLMSV · 05 Apr 2023

Detecting Backdoors in Pre-trained Encoders
Shiwei Feng, Guanhong Tao, Shuyang Cheng, Guangyu Shen, Xiangzhe Xu, Yingqi Liu, Kaiyuan Zhang, Shiqing Ma, Xiangyu Zhang
23 Mar 2023

SSL-Cleanse: Trojan Detection and Mitigation in Self-Supervised Learning
Mengxin Zheng, Jiaqi Xue, Zihao Wang, Xun Chen, Qian Lou, Lei Jiang, Xiaofeng Wang
16 Mar 2023

CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning
Hritik Bansal, Nishad Singhi, Yu Yang, Fan Yin, Aditya Grover, Kai-Wei Chang
AAML · 06 Mar 2023

FreeEagle: Detecting Complex Neural Trojans in Data-Free Cases
Chong Fu, Xuhong Zhang, S. Ji, Ting Wang, Peng Lin, Yanghe Feng, Jianwei Yin
AAML · 28 Feb 2023

Prompt Stealing Attacks Against Text-to-Image Generation Models
Xinyue Shen, Y. Qu, Michael Backes, Yang Zhang
20 Feb 2023

Backdoor Attacks to Pre-trained Unified Foundation Models
Zenghui Yuan, Yixin Liu, Kai Zhang, Pan Zhou, Lichao Sun
AAML · 18 Feb 2023

Mithridates: Auditing and Boosting Backdoor Resistance of Machine Learning Pipelines
Eugene Bagdasaryan, Vitaly Shmatikov
AAML · 09 Feb 2023

REaaS: Enabling Adversarially Robust Downstream Classifiers via Robust Encoder as a Service
Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong
SILM · AAML · 07 Jan 2023

Backdoor Attacks Against Dataset Distillation
Yugeng Liu, Zheng Li, Michael Backes, Yun Shen, Yang Zhang
DD · 03 Jan 2023

Fine-Tuning Is All You Need to Mitigate Backdoor Attacks
Zeyang Sha, Xinlei He, Pascal Berrang, Mathias Humbert, Yang Zhang
AAML · 18 Dec 2022

Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning
Hongbin Liu, Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong
SSL · 06 Dec 2022

ESTAS: Effective and Stable Trojan Attacks in Self-supervised Encoders with One Target Unlabelled Sample
Jiaqi Xue, Qiang Lou
AAML · 20 Nov 2022

CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning
Jinghuai Zhang, Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong
AAML · 15 Nov 2022

Rickrolling the Artist: Injecting Backdoors into Text Encoders for Text-to-Image Synthesis
Lukas Struppek, Dominik Hintersdorf, Kristian Kersting
SILM · 04 Nov 2022

Rethinking the Reverse-engineering of Trojan Triggers
Zhenting Wang, Kai Mei, Hailun Ding, Juan Zhai, Shiqing Ma
27 Oct 2022

Apple of Sodom: Hidden Backdoors in Superior Sentence Embeddings via Contrastive Learning
Xiaoyi Chen, Baisong Xin, Shengfang Zhai, Shiqing Ma, Qingni Shen, Zhonghai Wu
SILM · 20 Oct 2022

An Embarrassingly Simple Backdoor Attack on Self-supervised Learning
Changjiang Li, Ren Pang, Zhaohan Xi, Tianyu Du, S. Ji, Yuan Yao, Ting Wang
AAML · 13 Oct 2022

Backdoor Attacks in the Supply Chain of Masked Image Modeling
Xinyue Shen, Xinlei He, Zheng Li, Yun Shen, Michael Backes, Yang Zhang
04 Oct 2022

The "Beatrix" Resurrections: Robust Backdoor Detection via Gram Matrices
Wanlun Ma, Derui Wang, Ruoxi Sun, Minhui Xue, S. Wen, Yang Xiang
AAML · 23 Sep 2022

SSL-WM: A Black-Box Watermarking Approach for Encoders Pre-trained by Self-supervised Learning
Peizhuo Lv, Pan Li, Shenchen Zhu, Shengzhi Zhang, Kai Chen, ..., Fan Xiang, Yuling Cai, Hualong Ma, Yingjun Zhang, Guozhu Meng
AAML · 08 Sep 2022

Machine Learning with Confidential Computing: A Systematization of Knowledge
Fan Mo, Zahra Tarkhani, Hamed Haddadi
22 Aug 2022

Private, Efficient, and Accurate: Protecting Models Trained by Multi-party Learning with Differential Privacy
Wenqiang Ruan, Ming Xu, Wenjing Fang, Li Wang, Lei Wang, Wei Han
18 Aug 2022

AWEncoder: Adversarial Watermarking Pre-trained Encoders in Contrastive Learning
Tianxing Zhang, Hanzhou Wu, Xiaofeng Lu, Guangling Sun
AAML · 08 Aug 2022

Invisible Backdoor Attacks Using Data Poisoning in the Frequency Domain
Chang Yue, Peizhuo Lv, Ruigang Liang, Kai Chen
AAML · 09 Jul 2022

BppAttack: Stealthy and Efficient Trojan Attacks against Deep Neural Networks via Image Quantization and Contrastive Adversarial Learning
Zhenting Wang, Juan Zhai, Shiqing Ma
AAML · 26 May 2022

PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning
Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong
13 May 2022

SeqNet: An Efficient Neural Network for Automatic Malware Detection
Jiawei Xu, Wenxuan Fu, Haoyu Bu, Zhi Wang, Lingyun Ying
AAML · 08 May 2022

Backdooring Explainable Machine Learning
Maximilian Noppel, Lukas Peter, Christian Wressnegger
AAML · 20 Apr 2022

Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning
Hao He, Kaiwen Zha, Dina Katabi
AAML · 22 Feb 2022

Training with More Confidence: Mitigating Injected and Natural Backdoors During Training
Zhenting Wang, Hailun Ding, Juan Zhai, Shiqing Ma
AAML · 13 Feb 2022

Jigsaw Puzzle: Selective Backdoor Attack to Subvert Malware Classifiers
Limin Yang, Zhi Chen, Jacopo Cortellazzi, Feargus Pendlebury, Kevin Tu, Fabio Pierazzi, Lorenzo Cavallaro, Gang Wang
AAML · 11 Feb 2022

Backdoor Defense via Decoupling the Training Process
Kunzhe Huang, Yiming Li, Baoyuan Wu, Zhan Qin, Kui Ren
AAML · FedML · 05 Feb 2022

SSLGuard: A Watermarking Scheme for Self-supervised Learning Pre-trained Encoders
Tianshuo Cong, Xinlei He, Yang Zhang
27 Jan 2022

Watermarking Pre-trained Encoders in Contrastive Learning
Yutong Wu, Han Qiu, Tianwei Zhang, L. Jiwei, M. Qiu
20 Jan 2022

Can't Steal? Cont-Steal! Contrastive Stealing Attacks Against Image Encoders
Zeyang Sha, Xinlei He, Ning Yu, Michael Backes, Yang Zhang
19 Jan 2022

StolenEncoder: Stealing Pre-trained Encoders in Self-supervised Learning
Yupei Liu, Jinyuan Jia, Hongbin Liu, Neil Zhenqiang Gong
MIACV · 15 Jan 2022

Spinning Language Models: Risks of Propaganda-As-A-Service and Countermeasures
Eugene Bagdasaryan, Vitaly Shmatikov
SILM · AAML · 09 Dec 2021

A General Framework for Defending Against Backdoor Attacks via Influence Graph
Xiaofei Sun, Jiwei Li, Xiaoya Li, Ziyao Wang, Tianwei Zhang, Han Qiu, Fei Wu, Chun Fan
AAML · TDI · 29 Nov 2021

10 Security and Privacy Problems in Large Foundation Models
Jinyuan Jia, Hongbin Liu, Neil Zhenqiang Gong
28 Oct 2021

FooBaR: Fault Fooling Backdoor Attack on Neural Network Training
J. Breier, Xiaolu Hou, Martín Ochoa, Jesus Solano
SILM · AAML · 23 Sep 2021

EncoderMI: Membership Inference against Pre-trained Encoders in Contrastive Learning
Hongbin Liu, Jinyuan Jia, Wenjie Qu, Neil Zhenqiang Gong
25 Aug 2021

Backdoor Attacks on Self-Supervised Learning
Aniruddha Saha, Ajinkya Tejankar, Soroush Abbasi Koohpayegani, Hamed Pirsiavash
SSL · AAML · 21 May 2021

ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models
Yugeng Liu, Rui Wen, Xinlei He, A. Salem, Zhikun Zhang, Michael Backes, Emiliano De Cristofaro, Mario Fritz, Yang Zhang
AAML · 04 Feb 2021