Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning
15 December 2017
Xinyun Chen, Chang Liu, Bo Li, Kimberly Lu, Dawn Song
AAML, SILM
Papers citing "Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning" (50 of 361 papers shown)
Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes. Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong (AAML). 26 Oct 2020
Concealed Data Poisoning Attacks on NLP Models. Eric Wallace, Tony Zhao, Shi Feng, Sameer Singh (SILM). 23 Oct 2020
Mitigating Sybil Attacks on Differential Privacy based Federated Learning. Yupeng Jiang, Yong Li, Yipeng Zhou, Xi Zheng (FedML, AAML). 20 Oct 2020
From Distributed Machine Learning To Federated Learning: In The View Of Data Privacy And Security. Sheng Shen, Tianqing Zhu, Di Wu, Wei Wang, Wanlei Zhou (FedML, OOD). 19 Oct 2020
Input-Aware Dynamic Backdoor Attack. A. Nguyen, Anh Tran (AAML). 16 Oct 2020
Reverse Engineering Imperceptible Backdoor Attacks on Deep Neural Networks for Detection and Training Set Cleansing. Zhen Xiang, David J. Miller, G. Kesidis. 15 Oct 2020
Poison Attacks against Text Datasets with Conditional Adversarially Regularized Autoencoder. Alvin Chan, Yi Tay, Yew-Soon Ong, Aston Zhang (SILM). 06 Oct 2020
What Do You See? Evaluation of Explainable Artificial Intelligence (XAI) Interpretability through Neural Backdoors. Yi-Shan Lin, Wen-Chuan Lee, Z. Berkay Celik (XAI). 22 Sep 2020
Can Adversarial Weight Perturbations Inject Neural Backdoors? Siddhant Garg, Adarsh Kumar, Vibhor Goel, Yingyu Liang (AAML). 04 Aug 2020
Practical Detection of Trojan Neural Networks: Data-Limited and Data-Free Cases. Ren Wang, Gaoyuan Zhang, Sijia Liu, Pin-Yu Chen, Jinjun Xiong, Meng Wang (AAML). 31 Jul 2020
Cassandra: Detecting Trojaned Networks from Adversarial Perturbations. Xiaoyu Zhang, Ajmal Mian, Rohit Gupta, Nazanin Rahnavard, M. Shah (AAML). 28 Jul 2020
Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review. Yansong Gao, Bao Gia Doan, Zhi-Li Zhang, Siqi Ma, Jiliang Zhang, Anmin Fu, Surya Nepal, Hyoungshick Kim (AAML). 21 Jul 2020
Backdoor Learning: A Survey. Yiming Li, Yong Jiang, Zhifeng Li, Shutao Xia (AAML). 17 Jul 2020
Odyssey: Creation, Analysis and Detection of Trojan Models. Marzieh Edraki, Nazmul Karim, Nazanin Rahnavard, Ajmal Mian, M. Shah (AAML). 16 Jul 2020
Mitigating backdoor attacks in LSTM-based Text Classification Systems by Backdoor Keyword Identification. Chuanshuai Chen, Jiazhu Dai (SILM). 11 Jul 2020
Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks. Yunfei Liu, Xingjun Ma, James Bailey, Feng Lu (AAML). 05 Jul 2020
ConFoc: Content-Focus Protection Against Trojan Attacks on Neural Networks. Miguel Villarreal-Vasquez, B. Bhargava (AAML). 01 Jul 2020
Natural Backdoor Attack on Text Data. Lichao Sun (SILM). 29 Jun 2020
Backdoor Attacks Against Deep Learning Systems in the Physical World. Emily Wenger, Josephine Passananti, A. Bhagoji, Yuanshun Yao, Haitao Zheng, Ben Y. Zhao (AAML). 25 Jun 2020
Subpopulation Data Poisoning Attacks. Matthew Jagielski, Giorgio Severi, Niklas Pousette Harger, Alina Oprea (AAML, SILM). 24 Jun 2020
Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks. Avi Schwarzschild, Micah Goldblum, Arjun Gupta, John P. Dickerson, Tom Goldstein (AAML, TDI). 22 Jun 2020
Backdoor Attacks to Graph Neural Networks. Zaixi Zhang, Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong (GNN). 19 Jun 2020
An Embarrassingly Simple Approach for Trojan Attack in Deep Neural Networks. Ruixiang Tang, Mengnan Du, Ninghao Liu, Fan Yang, Xia Hu (AAML). 15 Jun 2020
Exploring the Vulnerability of Deep Neural Networks: A Study of Parameter Corruption. Xu Sun, Zhiyuan Zhang, Xuancheng Ren, Ruixuan Luo, Liangyou Li. 10 Jun 2020
Blind Backdoors in Deep Learning Models. Eugene Bagdasaryan, Vitaly Shmatikov (AAML, FedML, SILM). 08 May 2020
Bullseye Polytope: A Scalable Clean-Label Poisoning Attack with Improved Transferability. H. Aghakhani, Dongyu Meng, Yu-Xiang Wang, Christopher Kruegel, Giovanni Vigna (AAML). 01 May 2020
Adversarial Learning Guarantees for Linear Hypotheses and Neural Networks. Pranjal Awasthi, Natalie Frank, M. Mohri (AAML). 28 Apr 2020
Weight Poisoning Attacks on Pre-trained Models. Keita Kurita, Paul Michel, Graham Neubig (AAML, SILM). 14 Apr 2020
An Overview of Federated Deep Learning Privacy Attacks and Defensive Strategies. David Enthoven, Zaid Al-Ars (FedML). 01 Apr 2020
PoisHygiene: Detecting and Mitigating Poisoning Attacks in Neural Networks. Junfeng Guo, Zelun Kong, Cong Liu (AAML). 24 Mar 2020
Backdooring and Poisoning Neural Networks with Image-Scaling Attacks. Erwin Quiring, Konrad Rieck (AAML). 19 Mar 2020
Towards Probabilistic Verification of Machine Unlearning. David M. Sommer, Liwei Song, Sameer Wagh, Prateek Mittal (AAML). 09 Mar 2020
Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers. Giorgio Severi, J. Meyer, Scott E. Coull, Alina Oprea (AAML, SILM). 02 Mar 2020
Radioactive data: tracing through training. Alexandre Sablayrolles, Matthijs Douze, Cordelia Schmid, Hervé Jégou. 03 Feb 2020
Label-Consistent Backdoor Attacks. Alexander Turner, Dimitris Tsipras, A. Madry (AAML). 05 Dec 2019
Revealing Perceptible Backdoors, without the Training Set, via the Maximum Achievable Misclassification Fraction Statistic. Zhen Xiang, David J. Miller, Hang Wang, G. Kesidis (AAML). 18 Nov 2019
REFIT: A Unified Watermark Removal Framework For Deep Learning Systems With Limited Data. Xinyun Chen, Wenxiao Wang, Chris Bender, Yiming Ding, R. Jia, Bo Li, Dawn Song (AAML). 17 Nov 2019
A Unified Framework for Data Poisoning Attack to Graph-based Semi-supervised Learning. Xuanqing Liu, Si Si, Xiaojin Zhu, Yang Li, Cho-Jui Hsieh (AAML). 30 Oct 2019
Defending Neural Backdoors via Generative Distribution Modeling. Ximing Qiao, Yukun Yang, H. Li (AAML). 10 Oct 2019
Detection of Backdoors in Trained Classifiers Without Access to the Training Set. Zhen Xiang, David J. Miller, G. Kesidis (AAML). 27 Aug 2019
Februus: Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems. Bao Gia Doan, Ehsan Abbasnejad, Damith C. Ranasinghe (AAML). 09 Aug 2019
Helen: Maliciously Secure Coopetitive Learning for Linear Models. Wenting Zheng, Raluca A. Popa, Joseph E. Gonzalez, Ion Stoica (FedML). 16 Jul 2019
Effectiveness of Distillation Attack and Countermeasure on Neural Network Watermarking. Ziqi Yang, Hung Dang, E. Chang (AAML). 14 Jun 2019
Bypassing Backdoor Detection Algorithms in Deep Learning. T. Tan, Reza Shokri (FedML, AAML). 31 May 2019
A backdoor attack against LSTM-based text classification systems. Jiazhu Dai, Chuanshuai Chen (SILM). 29 May 2019
Learning to Confuse: Generating Training Time Adversarial Data with Auto-Encoder. Ji Feng, Qi-Zhi Cai, Zhi-Hua Zhou (AAML). 22 May 2019
Adversarial Learning in Statistical Classification: A Comprehensive Review of Defenses Against Attacks. David J. Miller, Zhen Xiang, G. Kesidis (AAML). 12 Apr 2019
A Target-Agnostic Attack on Deep Models: Exploiting Security Vulnerabilities of Transfer Learning. Shahbaz Rezaei, Xin Liu (SILM, AAML). 08 Apr 2019
Design of intentional backdoors in sequential models. Zhaoyuan Yang, N. Iyer, Johan Reimann, Nurali Virani (SILM, AAML). 26 Feb 2019
A new Backdoor Attack in CNNs by training set corruption without label poisoning. Mauro Barni, Kassem Kallas, B. Tondi (AAML). 12 Feb 2019