ResearchTrend.AI

Backdoor Learning: A Survey
arXiv: 2007.08745 (versions v1–v5; v5 latest)
17 July 2020
Yiming Li, Yong Jiang, Zhifeng Li, Shutao Xia
AAML
Links: arXiv (abs) · PDF · HTML · GitHub (1,107★)

Papers citing "Backdoor Learning: A Survey"

50 / 170 papers shown
RAB: Provable Robustness Against Backdoor Attacks
Maurice Weber
Xiaojun Xu
Bojan Karlas
Ce Zhang
Yue Liu
AAML
71
162
0
19 Mar 2020
Backdooring and Poisoning Neural Networks with Image-Scaling Attacks
Erwin Quiring
Konrad Rieck
AAML
77
72
0
19 Mar 2020
Towards Probabilistic Verification of Machine Unlearning
David M. Sommer
Liwei Song
Sameer Wagh
Prateek Mittal
AAML
94
72
0
09 Mar 2020
Analyzing Accuracy Loss in Randomized Smoothing Defenses
Yue Gao
Harrison Rosenberg
Kassem Fawaz
S. Jha
Justin Hsu
AAML
52
6
0
03 Mar 2020
On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping
Sanghyun Hong
Varun Chandrasekaran
Yigitcan Kaya
Tudor Dumitras
Nicolas Papernot
AAML
82
136
0
26 Feb 2020
Defending against Backdoor Attack on Deep Neural Networks
Kaidi Xu
Sijia Liu
Pin-Yu Chen
Pu Zhao
Xinyu Lin
Xue Lin
AAML
63
49
0
26 Feb 2020
On Hiding Neural Networks Inside Neural Networks
Chuan Guo
Ruihan Wu
Kilian Q. Weinberger
16
6
0
24 Feb 2020
On Adaptive Attacks to Adversarial Example Defenses
Florian Tramèr
Nicholas Carlini
Wieland Brendel
Aleksander Madry
AAML
277
834
0
19 Feb 2020
NNoculation: Catching BadNets in the Wild
A. Veldanda
Kang Liu
Benjamin Tan
Prashanth Krishnamurthy
Farshad Khorrami
Ramesh Karri
Brendan Dolan-Gavitt
S. Garg
AAML OnRL
54
20
0
19 Feb 2020
Targeted Forgetting and False Memory Formation in Continual Learners through Adversarial Backdoor Attacks
Muhammad Umer
Glenn Dawson
R. Polikar
AAML
24
17
0
17 Feb 2020
Certified Robustness to Label-Flipping Attacks via Randomized Smoothing
Elan Rosenfeld
Ezra Winston
Pradeep Ravikumar
J. Zico Kolter
OOD AAML
59
156
0
07 Feb 2020
Learning to Detect Malicious Clients for Robust Federated Learning
Suyi Li
Yong Cheng
Wei Wang
Yang Liu
Tianjian Chen
AAML FedML
109
225
0
01 Feb 2020
Backdoor Attacks against Transfer Learning with Pre-trained Deep Learning Models
Shuo Wang
Surya Nepal
Carsten Rudolph
M. Grobler
Shangyu Chen
Tianle Chen
AAML
47
103
0
10 Jan 2020
Attack-Resistant Federated Learning with Residual-based Reweighting
Shuhao Fu
Chulin Xie
Yue Liu
Qifeng Chen
FedML AAML
78
93
0
24 Dec 2019
Label-Consistent Backdoor Attacks
Alexander Turner
Dimitris Tsipras
Aleksander Madry
AAML
68
389
0
05 Dec 2019
Deep Probabilistic Models to Detect Data Poisoning Attacks
Mahesh Subedar
Nilesh A. Ahuja
R. Krishnan
I. Ndiour
Omesh Tickoo
AAML TDI
45
24
0
03 Dec 2019
Poison as a Cure: Detecting & Neutralizing Variable-Sized Backdoor Attacks in Deep Neural Networks
Alvin Chan
Yew-Soon Ong
AAML
57
42
0
19 Nov 2019
Can You Really Backdoor Federated Learning?
Ziteng Sun
Peter Kairouz
A. Suresh
H. B. McMahan
FedML
75
572
0
18 Nov 2019
NeuronInspect: Detecting Backdoors in Neural Networks via Output Explanations
Xijie Huang
M. Alzantot
Mani B. Srivastava
AAML
72
105
0
18 Nov 2019
Robust Anomaly Detection and Backdoor Attack Detection Via Differential Privacy
Min Du
R. Jia
D. Song
AAML
72
176
0
16 Nov 2019
Adversarial Defense via Local Flatness Regularization
Jia Xu
Yiming Li
Yong Jiang
Shutao Xia
AAML
63
17
0
27 Oct 2019
Trojan Attacks on Wireless Signal Classification with Adversarial Machine Learning
Kemal Davaslioglu
Y. Sagduyu
AAML
38
58
0
23 Oct 2019
Defending Neural Backdoors via Generative Distribution Modeling
Ximing Qiao
Yukun Yang
H. Li
AAML
49
183
0
10 Oct 2019
Detecting AI Trojans Using Meta Neural Analysis
Xiaojun Xu
Qi Wang
Huichen Li
Nikita Borisov
Carl A. Gunter
Yue Liu
81
323
0
08 Oct 2019
Hidden Trigger Backdoor Attacks
Aniruddha Saha
Akshayvarun Subramanya
Hamed Pirsiavash
81
624
0
30 Sep 2019
TBT: Targeted Neural Network Attack with Bit Trojan
Adnan Siraj Rakin
Zhezhi He
Deliang Fan
AAML
59
214
0
10 Sep 2019
Invisible Backdoor Attacks on Deep Neural Networks via Steganography and Regularization
Shaofeng Li
Minhui Xue
Benjamin Zi Hao Zhao
Haojin Zhu
Dali Kaafar
51
60
0
06 Sep 2019
Februus: Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems
Bao Gia Doan
Ehsan Abbasnejad
Damith C. Ranasinghe
AAML
47
66
0
09 Aug 2019
Model Agnostic Defence against Backdoor Attacks in Machine Learning
Sakshi Udeshi
Shanshan Peng
Gerald Woo
Lionell Loh
Louth Rawshan
Sudipta Chattopadhyay
AAML
46
104
0
06 Aug 2019
TABOR: A Highly Accurate Approach to Inspecting and Restoring Trojan Backdoors in AI Systems
Wenbo Guo
Lun Wang
Masashi Sugiyama
Min Du
D. Song
71
229
0
02 Aug 2019
Demon in the Variant: Statistical Analysis of DNNs for Robust Backdoor Contamination Detection
Di Tang
Xiaofeng Wang
Haixu Tang
Kehuan Zhang
AAML
61
201
0
02 Aug 2019
Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs
Soheil Kolouri
Aniruddha Saha
Hamed Pirsiavash
Heiko Hoffmann
AAML
60
234
0
26 Jun 2019
Mimic and Fool: A Task Agnostic Adversarial Attack
Akshay Chaturvedi
Utpal Garain
AAML
47
27
0
11 Jun 2019
Bypassing Backdoor Detection Algorithms in Deep Learning
T. Tan
Reza Shokri
FedML AAML
87
152
0
31 May 2019
A backdoor attack against LSTM-based text classification systems
Jiazhu Dai
Chuanshuai Chen
SILM
81
329
0
29 May 2019
Fooling automated surveillance cameras: adversarial patches to attack person detection
Simen Thys
W. V. Ranst
Toon Goedemé
AAML
107
569
0
18 Apr 2019
Gotta Catch 'Em All: Using Honeypots to Catch Adversarial Attacks on Neural Networks
Shawn Shan
Emily Wenger
Bolun Wang
Yangqiu Song
Haitao Zheng
Ben Y. Zhao
67
73
0
18 Apr 2019
Efficient Decision-based Black-box Adversarial Attacks on Face Recognition
Yinpeng Dong
Hang Su
Baoyuan Wu
Zhifeng Li
Wen Liu
Tong Zhang
Jun Zhu
CVBM AAML
77
407
0
09 Apr 2019
TrojDRL: Trojan Attacks on Deep Reinforcement Learning Agents
Panagiota Kiourti
Kacper Wardega
Susmit Jha
Wenchao Li
AAML
49
52
0
01 Mar 2019
STRIP: A Defence Against Trojan Attacks on Deep Neural Networks
Yansong Gao
Chang Xu
Derui Wang
Shiping Chen
Damith C. Ranasinghe
Surya Nepal
AAML
77
809
0
18 Feb 2019
Certified Adversarial Robustness via Randomized Smoothing
Jeremy M. Cohen
Elan Rosenfeld
J. Zico Kolter
AAML
152
2,039
0
08 Feb 2019
Theoretically Principled Trade-off between Robustness and Accuracy
Hongyang R. Zhang
Yaodong Yu
Jiantao Jiao
Eric Xing
L. Ghaoui
Michael I. Jordan
137
2,551
0
24 Jan 2019
Adversarial Sample Detection for Deep Neural Network through Model Mutation Testing
Jingyi Wang
Guoliang Dong
Jun Sun
Xinyu Wang
Peixin Zhang
AAML
43
191
0
14 Dec 2018
Backdooring Convolutional Neural Networks via Targeted Weight Perturbations
Jacob Dumford
Walter J. Scheirer
AAML
71
120
0
07 Dec 2018
SentiNet: Detecting Localized Universal Attacks Against Deep Learning Systems
Edward Chou
Florian Tramèr
Giancarlo Pellegrino
AAML
219
292
0
02 Dec 2018
Analyzing Federated Learning through an Adversarial Lens
A. Bhagoji
Supriyo Chakraborty
Prateek Mittal
S. Calo
FedML
280
1,054
0
29 Nov 2018
Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering
Bryant Chen
Wilka Carvalho
Wenjie Li
Heiko Ludwig
Benjamin Edwards
Chengyao Chen
Ziqiang Cao
Biplav Srivastava
AAML
89
796
0
09 Nov 2018
Spectral Signatures in Backdoor Attacks
Brandon Tran
Jerry Li
Aleksander Madry
AAML
91
789
0
01 Nov 2018
Backdoor Embedding in Convolutional Neural Network Models via Invisible Perturbation
C. Liao
Haoti Zhong
Anna Squicciarini
Sencun Zhu
David J. Miller
SILM
87
313
0
30 Aug 2018
How To Backdoor Federated Learning
Eugene Bagdasaryan
Andreas Veit
Yiqing Hua
D. Estrin
Vitaly Shmatikov
SILM FedML
97
1,913
0
02 Jul 2018