Blind Backdoors in Deep Learning Models

8 May 2020
Eugene Bagdasaryan
Vitaly Shmatikov
    AAML
    FedML
    SILM

Papers citing "Blind Backdoors in Deep Learning Models"

50 / 162 papers shown
SSL-WM: A Black-Box Watermarking Approach for Encoders Pre-trained by Self-supervised Learning
Peizhuo Lv
Pan Li
Shenchen Zhu
Shengzhi Zhang
Kai Chen
...
Fan Xiang
Yuling Cai
Hualong Ma
Yingjun Zhang
Guozhu Meng
AAML
28
7
0
08 Sep 2022
Verifiable Encodings for Secure Homomorphic Analytics
Sylvain Chatel
Christian Knabenhans
Apostolos Pyrgelis
Carmela Troncoso
Jean-Pierre Hubaux
28
19
0
28 Jul 2022
FRIB: Low-poisoning Rate Invisible Backdoor Attack based on Feature Repair
Hui Xia
Xiugui Yang
X. Qian
Rui Zhang
AAML
27
0
0
26 Jul 2022
Auditing Visualizations: Transparency Methods Struggle to Detect Anomalous Behavior
Jean-Stanislas Denain
Jacob Steinhardt
AAML
39
7
0
27 Jun 2022
BackdoorBench: A Comprehensive Benchmark of Backdoor Learning
Baoyuan Wu
Hongrui Chen
Mingda Zhang
Zihao Zhu
Shaokui Wei
Danni Yuan
Chaoxiao Shen
ELM
AAML
33
138
0
25 Jun 2022
Defending Backdoor Attacks on Vision Transformer via Patch Processing
Khoa D. Doan
Yingjie Lao
Peng Yang
Ping Li
AAML
25
21
0
24 Jun 2022
A Unified Evaluation of Textual Backdoor Learning: Frameworks and Benchmarks
Ganqu Cui
Lifan Yuan
Bingxiang He
Yangyi Chen
Zhiyuan Liu
Maosong Sun
AAML
ELM
SILM
26
68
0
17 Jun 2022
Architectural Backdoors in Neural Networks
Mikel Bober-Irizar
Ilia Shumailov
Yiren Zhao
Robert D. Mullins
Nicolas Papernot
AAML
18
23
0
15 Jun 2022
Turning a Curse into a Blessing: Enabling In-Distribution-Data-Free Backdoor Removal via Stabilized Model Inversion
Si-An Chen
Yi Zeng
J. T. Wang
Won Park
Xun Chen
Lingjuan Lyu
Zhuoqing Mao
R. Jia
8
3
0
14 Jun 2022
Adversarial RAW: Image-Scaling Attack Against Imaging Pipeline
Junjian Li
Honglong Chen
AAML
11
2
0
02 Jun 2022
Towards A Proactive ML Approach for Detecting Backdoor Poison Samples
Xiangyu Qi
Tinghao Xie
Jiachen T. Wang
Tong Wu
Saeed Mahloujifar
Prateek Mittal
AAML
14
47
0
26 May 2022
Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets
Florian Tramèr
Reza Shokri
Ayrton San Joaquin
Hoang Minh Le
Matthew Jagielski
Sanghyun Hong
Nicholas Carlini
MIACV
36
107
0
31 Mar 2022
Trojan Horse Training for Breaking Defenses against Backdoor Attacks in Deep Learning
Arezoo Rajabi
Bhaskar Ramasubramanian
Radha Poovendran
AAML
17
4
0
25 Mar 2022
Semi-Targeted Model Poisoning Attack on Federated Learning via Backward Error Analysis
Yuwei Sun
H. Ochiai
Jun Sakuma
AAML
FedML
37
15
0
22 Mar 2022
Dynamic Backdoors with Global Average Pooling
Stefanos Koffas
S. Picek
Mauro Conti
AAML
11
8
0
04 Mar 2022
Physical Backdoor Attacks to Lane Detection Systems in Autonomous Driving
Xingshuo Han
Guowen Xu
Yuanpu Zhou
Xuehuan Yang
Jiwei Li
Tianwei Zhang
AAML
30
43
0
02 Mar 2022
A Survey of Neural Trojan Attacks and Defenses in Deep Learning
Jie Wang
Ghulam Mubashar Hassan
Naveed Akhtar
AAML
31
24
0
15 Feb 2022
Training with More Confidence: Mitigating Injected and Natural Backdoors During Training
Zhenting Wang
Hailun Ding
Juan Zhai
Shiqing Ma
AAML
15
45
0
13 Feb 2022
Jigsaw Puzzle: Selective Backdoor Attack to Subvert Malware Classifiers
Limin Yang
Zhi Chen
Jacopo Cortellazzi
Feargus Pendlebury
Kevin Tu
Fabio Pierazzi
Lorenzo Cavallaro
Gang Wang
AAML
18
36
0
11 Feb 2022
Backdoors Stuck At The Frontdoor: Multi-Agent Backdoor Attacks That Backfire
Siddhartha Datta
N. Shadbolt
AAML
29
7
0
28 Jan 2022
Dangerous Cloaking: Natural Trigger based Backdoor Attacks on Object Detectors in the Physical World
Hua Ma
Yinshan Li
Yansong Gao
A. Abuadbba
Zhi-Li Zhang
Anmin Fu
Hyoungshick Kim
S. Al-Sarawi
N. Surya
Derek Abbott
21
34
0
21 Jan 2022
Distributed Machine Learning and the Semblance of Trust
Dmitrii Usynin
Alexander Ziller
Daniel Rueckert
Jonathan Passerat-Palmbach
Georgios Kaissis
18
1
0
21 Dec 2021
Spinning Language Models: Risks of Propaganda-As-A-Service and Countermeasures
Eugene Bagdasaryan
Vitaly Shmatikov
SILM
AAML
27
76
0
09 Dec 2021
Defending against Model Stealing via Verifying Embedded External Features
Yiming Li
Linghui Zhu
Xiaojun Jia
Yong Jiang
Shutao Xia
Xiaochun Cao
AAML
43
61
0
07 Dec 2021
TnT Attacks! Universal Naturalistic Adversarial Patches Against Deep Neural Network Systems
Bao Gia Doan
Minhui Xue
Shiqing Ma
Ehsan Abbasnejad
D. Ranasinghe
AAML
41
53
0
19 Nov 2021
Don't Knock! Rowhammer at the Backdoor of DNN Models
M. Tol
Saad Islam
Andrew J. Adiletta
B. Sunar
Ziming Zhang
AAML
27
15
0
14 Oct 2021
Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks
Shawn Shan
A. Bhagoji
Haitao Zheng
Ben Y. Zhao
AAML
91
50
0
13 Oct 2021
Widen The Backdoor To Let More Attackers In
Siddhartha Datta
Giulio Lovisotto
Ivan Martinovic
N. Shadbolt
AAML
13
3
0
09 Oct 2021
Adversarial Unlearning of Backdoors via Implicit Hypergradient
Yi Zeng
Si-An Chen
Won Park
Z. Morley Mao
Ming Jin
R. Jia
AAML
25
172
0
07 Oct 2021
Unsolved Problems in ML Safety
Dan Hendrycks
Nicholas Carlini
John Schulman
Jacob Steinhardt
186
273
0
28 Sep 2021
Certifiers Make Neural Networks Vulnerable to Availability Attacks
Tobias Lorenz
M. Kwiatkowska
Mario Fritz
AAML
SILM
12
2
0
25 Aug 2021
TRAPDOOR: Repurposing backdoors to detect dataset bias in machine learning-based genomic analysis
Esha Sarkar
Michail Maniatakos
29
3
0
14 Aug 2021
BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning
Jinyuan Jia
Yupei Liu
Neil Zhenqiang Gong
SILM
SSL
24
151
0
01 Aug 2021
Can You Hear It? Backdoor Attacks via Ultrasonic Triggers
Stefanos Koffas
Jing Xu
Mauro Conti
S. Picek
AAML
17
66
0
30 Jul 2021
Decentralized Deep Learning for Multi-Access Edge Computing: A Survey on Communication Efficiency and Trustworthiness
Yuwei Sun
H. Ochiai
Hiroshi Esaki
FedML
74
45
0
30 Jul 2021
Spinning Sequence-to-Sequence Models with Meta-Backdoors
Eugene Bagdasaryan
Vitaly Shmatikov
SILM
AAML
38
8
0
22 Jul 2021
The Threat of Offensive AI to Organizations
Yisroel Mirsky
Ambra Demontis
J. Kotak
Ram Shankar
Deng Gelei
Liu Yang
Xinming Zhang
Wenke Lee
Yuval Elovici
Battista Biggio
33
81
0
30 Jun 2021
Poisoning Deep Reinforcement Learning Agents with In-Distribution Triggers
C. Ashcraft
Kiran Karra
15
22
0
14 Jun 2021
Handcrafted Backdoors in Deep Neural Networks
Sanghyun Hong
Nicholas Carlini
Alexey Kurakin
19
71
0
08 Jun 2021
RBNN: Memory-Efficient Reconfigurable Deep Binary Neural Network with IP Protection for Internet of Things
Huming Qiu
Hua Ma
Zhi-Li Zhang
Yifeng Zheng
Anmin Fu
Pan Zhou
Yansong Gao
Derek Abbott
S. Al-Sarawi
MQ
19
9
0
09 May 2021
Hidden Backdoors in Human-Centric Language Models
Shaofeng Li
Hui Liu
Tian Dong
Benjamin Zi Hao Zhao
Minhui Xue
Haojin Zhu
Jialiang Lu
SILM
27
143
0
01 May 2021
Stealthy Backdoors as Compression Artifacts
Yulong Tian
Fnu Suya
Fengyuan Xu
David E. Evans
27
22
0
30 Apr 2021
Robust Backdoor Attacks against Deep Neural Networks in Real Physical World
Mingfu Xue
Can He
Shichang Sun
Jian Wang
Weiqiang Liu
AAML
31
43
0
15 Apr 2021
PointBA: Towards Backdoor Attacks in 3D Point Cloud
Xinke Li
Zhirui Chen
Yue Zhao
Zekun Tong
Yabang Zhao
A. Lim
Qiufeng Wang
3DPC
AAML
60
51
0
30 Mar 2021
MISA: Online Defense of Trojaned Models using Misattributions
Panagiota Kiourti
Wenchao Li
Anirban Roy
Karan Sikka
Susmit Jha
13
10
0
29 Mar 2021
EX-RAY: Distinguishing Injected Backdoor from Natural Features in Neural Networks by Examining Differential Feature Symmetry
Yingqi Liu
Guangyu Shen
Guanhong Tao
Zhenting Wang
Shiqing Ma
Xinming Zhang
AAML
24
8
0
16 Mar 2021
A Real-time Defense against Website Fingerprinting Attacks
Shawn Shan
A. Bhagoji
Haitao Zheng
Ben Y. Zhao
AAML
14
19
0
08 Feb 2021
Red Alarm for Pre-trained Models: Universal Vulnerability to Neuron-Level Backdoor Attacks
Zhengyan Zhang
Guangxuan Xiao
Yongwei Li
Tian Lv
Fanchao Qi
Zhiyuan Liu
Yasheng Wang
Xin Jiang
Maosong Sun
AAML
23
67
0
18 Jan 2021
Explainability Matters: Backdoor Attacks on Medical Imaging
Munachiso Nwadike
Takumi Miyawaki
Esha Sarkar
Michail Maniatakos
Farah E. Shamout
AAML
16
13
0
30 Dec 2020
TrojanZoo: Towards Unified, Holistic, and Practical Evaluation of Neural Backdoors
Ren Pang
Zheng-Wei Zhang
Xiangshan Gao
Zhaohan Xi
S. Ji
Peng Cheng
Xiapu Luo
Ting Wang
AAML
29
31
0
16 Dec 2020