Spectral Signatures in Backdoor Attacks
arXiv:1811.00636. Brandon Tran, Jerry Li, Aleksander Madry. AAML. 1 November 2018.
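For context, the cited paper's defense scores each training example of a class by the squared projection of its centered feature representation onto the top singular direction of the class's feature matrix; poisoned examples tend to concentrate at the high end of these scores. A minimal NumPy sketch of that score (function and variable names here are illustrative, not taken from the paper's code):

```python
import numpy as np

def spectral_signature_scores(feats):
    """Spectral-signature outlier scores for one class.

    feats: (n, d) array, one learned feature representation per example.
    Returns an (n,) array; poisoned examples tend to score highest.
    """
    centered = feats - feats.mean(axis=0)          # remove the class mean
    # top right-singular vector of the centered feature matrix
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    top_dir = vt[0]
    # score = squared projection onto the top singular direction
    return (centered @ top_dir) ** 2
```

The defense then removes the highest-scoring fraction of each class (e.g. 1.5 times the assumed poisoning rate) and retrains on the remainder.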
Papers citing "Spectral Signatures in Backdoor Attacks" (49 of 249 shown)
Can We Mitigate Backdoor Attack Using Adversarial Detection Methods?
Kaidi Jin, Tianwei Zhang, Chao Shen, Yufei Chen, Ming Fan, Chenhao Lin, Ting Liu. AAML. Cited by 14. 26 Jun 2020.

Backdoor Attacks Against Deep Learning Systems in the Physical World
Emily Wenger, Josephine Passananti, A. Bhagoji, Yuanshun Yao, Haitao Zheng, Ben Y. Zhao. AAML. Cited by 205. 25 Jun 2020.

Subpopulation Data Poisoning Attacks
Matthew Jagielski, Giorgio Severi, Niklas Pousette Harger, Alina Oprea. AAML, SILM. Cited by 122. 24 Jun 2020.

Graph Backdoor
Zhaohan Xi, Ren Pang, S. Ji, Ting Wang. AI4CE, AAML. Cited by 172. 21 Jun 2020.

FaceHack: Triggering backdoored facial recognition systems using facial characteristics
Esha Sarkar, Hadjer Benkraouda, Michail Maniatakos. AAML. Cited by 39. 20 Jun 2020.

Backdoor Attacks to Graph Neural Networks
Zaixi Zhang, Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong. GNN. Cited by 219. 19 Jun 2020.

An Embarrassingly Simple Approach for Trojan Attack in Deep Neural Networks
Ruixiang Tang, Mengnan Du, Ninghao Liu, Fan Yang, Helen Zhou. AAML. Cited by 190. 15 Jun 2020.

Backdoor Attacks on Federated Meta-Learning
Chien-Lun Chen, L. Golubchik, Marco Paolieri. FedML. Cited by 32. 12 Jun 2020.

Backdoors in Neural Models of Source Code
Goutham Ramakrishnan, Aws Albarghouthi. AAML, SILM. Cited by 58. 11 Jun 2020.

Scalable Backdoor Detection in Neural Networks
Haripriya Harikumar, Vuong Le, Santu Rana, Sourangshu Bhattacharya, Sunil R. Gupta, Svetha Venkatesh. Cited by 24. 10 Jun 2020.

Blind Backdoors in Deep Learning Models
Eugene Bagdasaryan, Vitaly Shmatikov. AAML, FedML, SILM. Cited by 309. 08 May 2020.

Depth-2 Neural Networks Under a Data-Poisoning Attack
Sayar Karmakar, Anirbit Mukherjee, Ramchandran Muthukumar. Cited by 7. 04 May 2020.

Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness
Pu Zhao, Pin-Yu Chen, Payel Das, Karthikeyan N. Ramamurthy, Xue Lin. AAML. Cited by 191. 30 Apr 2020.

Live Trojan Attacks on Deep Neural Networks
Robby Costales, Chengzhi Mao, R. Norwitz, Bryan Kim, Junfeng Yang. AAML. Cited by 22. 22 Apr 2020.

Rethinking the Trigger of Backdoor Attack
Yiming Li, Tongqing Zhai, Baoyuan Wu, Yong Jiang, Zhifeng Li, Shutao Xia. LLMSV. Cited by 152. 09 Apr 2020.

RAB: Provable Robustness Against Backdoor Attacks
Maurice Weber, Xiaojun Xu, Bojan Karlas, Ce Zhang, Yue Liu. AAML. Cited by 164. 19 Mar 2020.

Stop-and-Go: Exploring Backdoor Attacks on Deep Reinforcement Learning-based Traffic Congestion Control Systems
Yue Wang, Esha Sarkar, Wenqing Li, Michail Maniatakos, Saif Eddin Jabari. AAML. Cited by 64. 17 Mar 2020.

Towards Probabilistic Verification of Machine Unlearning
David M. Sommer, Liwei Song, Sameer Wagh, Prateek Mittal. AAML. Cited by 73. 09 Mar 2020.

Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers
Giorgio Severi, J. Meyer, Scott E. Coull, Alina Oprea. AAML, SILM. Cited by 18. 02 Mar 2020.

Defending against Backdoor Attack on Deep Neural Networks
Kaidi Xu, Sijia Liu, Pin-Yu Chen, Pu Zhao, Xinyu Lin, Xue Lin. AAML. Cited by 49. 26 Feb 2020.

Towards Backdoor Attacks and Defense in Robust Machine Learning Models
E. Soremekun, Sakshi Udeshi, Sudipta Chattopadhyay. AAML. Cited by 14. 25 Feb 2020.

NNoculation: Catching BadNets in the Wild
A. Veldanda, Kang Liu, Benjamin Tan, Prashanth Krishnamurthy, Farshad Khorrami, Ramesh Karri, Brendan Dolan-Gavitt, S. Garg. AAML, OnRL. Cited by 20. 19 Feb 2020.

Certified Robustness to Label-Flipping Attacks via Randomized Smoothing
Elan Rosenfeld, Ezra Winston, Pradeep Ravikumar, J. Zico Kolter. OOD, AAML. Cited by 159. 07 Feb 2020.

Radioactive data: tracing through training
Alexandre Sablayrolles, Matthijs Douze, Cordelia Schmid, Hervé Jégou. Cited by 76. 03 Feb 2020.

Advances and Open Problems in Federated Learning
Peter Kairouz, H. B. McMahan, Brendan Avent, A. Bellet, M. Bennis, ..., Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, Sen Zhao. FedML, AI4CE. Cited by 6,335. 10 Dec 2019.

Label-Consistent Backdoor Attacks
Alexander Turner, Dimitris Tsipras, Aleksander Madry. AAML. Cited by 391. 05 Dec 2019.

Local Model Poisoning Attacks to Byzantine-Robust Federated Learning
Minghong Fang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong. AAML, OOD, FedML. Cited by 1,128. 26 Nov 2019.

Design and Evaluation of a Multi-Domain Trojan Detection Method on Deep Neural Networks
Yansong Gao, Yeonjae Kim, Bao Gia Doan, Zhi-Li Zhang, Gongxuan Zhang, Surya Nepal, Damith C. Ranasinghe, Hyoungshick Kim. AAML. Cited by 91. 23 Nov 2019.

Poison as a Cure: Detecting & Neutralizing Variable-Sized Backdoor Attacks in Deep Neural Networks
Alvin Chan, Yew-Soon Ong. AAML. Cited by 43. 19 Nov 2019.

Revealing Perceptible Backdoors, without the Training Set, via the Maximum Achievable Misclassification Fraction Statistic
Zhen Xiang, David J. Miller, Hang Wang, G. Kesidis. AAML. Cited by 9. 18 Nov 2019.

Can You Really Backdoor Federated Learning?
Ziteng Sun, Peter Kairouz, A. Suresh, H. B. McMahan. FedML. Cited by 582. 18 Nov 2019.

REFIT: A Unified Watermark Removal Framework For Deep Learning Systems With Limited Data
Xinyun Chen, Wenxiao Wang, Chris Bender, Yiming Ding, R. Jia, Yue Liu, Basel Alomair. AAML. Cited by 109. 17 Nov 2019.

Recent Advances in Algorithmic High-Dimensional Robust Statistics
Ilias Diakonikolas, D. Kane. OOD. Cited by 184. 14 Nov 2019.

A Tale of Evil Twins: Adversarial Inputs versus Poisoned Models
Ren Pang, Hua Shen, Xinyang Zhang, S. Ji, Yevgeniy Vorobeychik, Xiaopu Luo, Alex Liu, Ting Wang. AAML. Cited by 2. 05 Nov 2019.

Defending Neural Backdoors via Generative Distribution Modeling
Ximing Qiao, Yukun Yang, H. Li. AAML. Cited by 184. 10 Oct 2019.

Detecting AI Trojans Using Meta Neural Analysis
Xiaojun Xu, Qi Wang, Huichen Li, Nikita Borisov, Carl A. Gunter, Yue Liu. Cited by 326. 08 Oct 2019.

Piracy Resistant Watermarks for Deep Neural Networks
Huiying Li, Emily Willson, Shawn Shan, Bing Ye, Shehroz S. Khan. Cited by 26. 02 Oct 2019.

Hidden Trigger Backdoor Attacks
Aniruddha Saha, Akshayvarun Subramanya, Hamed Pirsiavash. Cited by 631. 30 Sep 2019.

Min-Max Optimization without Gradients: Convergence and Applications to Adversarial ML
Sijia Liu, Songtao Lu, Xiangyi Chen, Yao Feng, Kaidi Xu, Abdullah Al-Dujaili, Mingyi Hong, Una-May O'Reilly. Cited by 26. 30 Sep 2019.

Detection of Backdoors in Trained Classifiers Without Access to the Training Set
Zhen Xiang, David J. Miller, G. Kesidis. AAML. Cited by 24. 27 Aug 2019.

TABOR: A Highly Accurate Approach to Inspecting and Restoring Trojan Backdoors in AI Systems
Wenbo Guo, Lun Wang, Masashi Sugiyama, Min Du, Basel Alomair. Cited by 230. 02 Aug 2019.

Demon in the Variant: Statistical Analysis of DNNs for Robust Backdoor Contamination Detection
Di Tang, Xiaofeng Wang, Haixu Tang, Kehuan Zhang. AAML. Cited by 205. 02 Aug 2019.

Bypassing Backdoor Detection Algorithms in Deep Learning
T. Tan, Reza Shokri. FedML, AAML. Cited by 152. 31 May 2019.

A backdoor attack against LSTM-based text classification systems
Jiazhu Dai, Chuanshuai Chen. SILM. Cited by 330. 29 May 2019.

Semantic Adversarial Attacks: Parametric Transformations That Fool Deep Classifiers
Ameya Joshi, Amitangshu Mukherjee, Soumik Sarkar, Chinmay Hegde. AAML. Cited by 100. 17 Apr 2019.

Adversarial Learning in Statistical Classification: A Comprehensive Review of Defenses Against Attacks
David J. Miller, Zhen Xiang, G. Kesidis. AAML. Cited by 35. 12 Apr 2019.

STRIP: A Defence Against Trojan Attacks on Deep Neural Networks
Yansong Gao, Chang Xu, Derui Wang, Shiping Chen, Damith C. Ranasinghe, Surya Nepal. AAML. Cited by 819. 18 Feb 2019.

A Little Is Enough: Circumventing Defenses For Distributed Learning
Moran Baruch, Gilad Baruch, Yoav Goldberg. FedML. Cited by 514. 16 Feb 2019.

How To Backdoor Federated Learning
Eugene Bagdasaryan, Andreas Veit, Yiqing Hua, D. Estrin, Vitaly Shmatikov. SILM, FedML. Cited by 1,938. 02 Jul 2018.