Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering
arXiv: 1811.03728
9 November 2018
Bryant Chen, Wilka Carvalho, Wenjie Li, Heiko Ludwig, Benjamin Edwards, Chengyao Chen, Ziqiang Cao, Biplav Srivastava
AAML
Papers citing "Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering" (50 / 175 papers shown)
Stealthy Backdoor Attack for Code Models. Zhou Yang, Bowen Xu, Jie M. Zhang, Hong Jin Kang, Jieke Shi, Junda He, David Lo. AAML. 06 Jan 2023.
TrojanPuzzle: Covertly Poisoning Code-Suggestion Models. H. Aghakhani, Wei Dai, Andre Manoel, Xavier Fernandes, Anant Kharkar, Christopher Kruegel, Giovanni Vigna, David Evans, B. Zorn, Robert Sim. SILM. 06 Jan 2023.
Look, Listen, and Attack: Backdoor Attacks Against Video Action Recognition. Hasan Hammoud, Shuming Liu, Mohammad Alkhrashi, Fahad Albalawi, Guohao Li. AAML. 03 Jan 2023.
Fine-Tuning Is All You Need to Mitigate Backdoor Attacks. Zeyang Sha, Xinlei He, Pascal Berrang, Mathias Humbert, Yang Zhang. AAML. 18 Dec 2022.
Backdoor Attack Detection in Computer Vision by Applying Matrix Factorization on the Weights of Deep Networks. Khondoker Murad Hossain, Tim Oates. AAML. 15 Dec 2022.
Selective Amnesia: On Efficient, High-Fidelity and Blind Suppression of Backdoor Effects in Trojaned Machine Learning Models. Rui Zhu, Di Tang, Siyuan Tang, Xiaofeng Wang, Haixu Tang. AAML, FedML. 09 Dec 2022.
Rethinking Backdoor Data Poisoning Attacks in the Context of Semi-Supervised Learning. Marissa Connor, Vincent Emanuele. SILM, AAML. 05 Dec 2022.
Don't Watch Me: A Spatio-Temporal Trojan Attack on Deep-Reinforcement-Learning-Augment Autonomous Driving. Yinbo Yu, Jiajia Liu. 22 Nov 2022.
Provable Defense against Backdoor Policies in Reinforcement Learning. S. Bharti, Xuezhou Zhang, Adish Singla, Xiaojin Zhu. AAML. 18 Nov 2022.
Dormant Neural Trojans. Feisi Fu, Panagiota Kiourti, Wenchao Li. AAML. 02 Nov 2022.
Poison Attack and Defense on Deep Source Code Processing Models. Jia Li, Zhuo Li, Huangzhao Zhang, Ge Li, Zhi Jin, Xing Hu, Xin Xia. AAML. 31 Oct 2022.
Training set cleansing of backdoor poisoning by self-supervised representation learning. H. Wang, Soroush Karami, Ousmane Amadou Dia, H. Ritter, E. Emamjomeh-Zadeh, J. Chen, Zhen Xiang, D. J. Miller, G. Kesidis. SSL. 19 Oct 2022.
Marksman Backdoor: Backdoor Attacks with Arbitrary Target Class. Khoa D. Doan, Yingjie Lao, Ping Li. 17 Oct 2022.
CrowdGuard: Federated Backdoor Detection in Federated Learning. Phillip Rieger, T. Krauß, Markus Miettinen, Alexandra Dmitrienko, Ahmad-Reza Sadeghi (Technical University of Darmstadt). AAML, FedML. 14 Oct 2022.
Understanding Impacts of Task Similarity on Backdoor Attack and Detection. Di Tang, Rui Zhu, Xiaofeng Wang, Haixu Tang, Yi Chen. AAML. 12 Oct 2022.
Detecting Backdoors in Deep Text Classifiers. Youyan Guo, Jun Wang, Trevor Cohn. SILM. 11 Oct 2022.
BAFFLE: Hiding Backdoors in Offline Reinforcement Learning Datasets. Chen Gong, Zhou Yang, Yunru Bai, Junda He, Jieke Shi, ..., Arunesh Sinha, Bowen Xu, Xinwen Hou, David Lo, Guoliang Fan. AAML, OffRL. 07 Oct 2022.
Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protection. Yiming Li, Yang Bai, Yong Jiang, Yong-Liang Yang, Shutao Xia, Bo Li. AAML. 27 Sep 2022.
Data Isotopes for Data Provenance in DNNs. Emily Wenger, Xiuyu Li, Ben Y. Zhao, Vitaly Shmatikov. 29 Aug 2022.
RIBAC: Towards Robust and Imperceptible Backdoor Attack against Compact DNN. Huy Phan, Cong Shi, Yi Xie, Tian-Di Zhang, Zhuohang Li, Tianming Zhao, Jian-Dong Liu, Yan Wang, Ying-Cong Chen, Bo Yuan. AAML. 22 Aug 2022.
An anomaly detection approach for backdoored neural networks: face recognition as a case study. A. Unnervik, Sébastien Marcel. AAML. 22 Aug 2022.
Dispersed Pixel Perturbation-based Imperceptible Backdoor Trigger for Image Classifier Models. Yulong Wang, Minghui Zhao, Shenghong Li, Xinnan Yuan, W. Ni. 19 Aug 2022.
Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attacks. Tianwei Liu, Yu Yang, Baharan Mirzasoleiman. AAML. 14 Aug 2022.
Natural Backdoor Datasets. Emily Wenger, Roma Bhattacharjee, A. Bhagoji, Josephine Passananti, Emilio Andere, Haitao Zheng, Ben Y. Zhao. AAML. 21 Jun 2022.
Transferable Graph Backdoor Attack. Shuiqiao Yang, Bao Gia Doan, Paul Montague, O. Vel, Tamas Abraham, S. Çamtepe, Damith C. Ranasinghe, S. Kanhere. AAML. 21 Jun 2022.
DECK: Model Hardening for Defending Pervasive Backdoors. Guanhong Tao, Yingqi Liu, Shuyang Cheng, Shengwei An, Zhuo Zhang, Qiuling Xu, Guangyu Shen, Xiangyu Zhang. AAML. 18 Jun 2022.
Is Multi-Modal Necessarily Better? Robustness Evaluation of Multi-modal Fake News Detection. Jinyin Chen, Chengyu Jia, Haibin Zheng, Ruoxi Chen, Chenbo Fu. AAML. 17 Jun 2022.
CASSOCK: Viable Backdoor Attacks against DNN in The Wall of Source-Specific Backdoor Defences. Shang Wang, Yansong Gao, Anmin Fu, Zhi-Li Zhang, Yuqing Zhang, W. Susilo, Dongxi Liu. AAML. 31 May 2022.
Verifying Neural Networks Against Backdoor Attacks. Long H. Pham, Jun Sun. AAML. 14 May 2022.
A Temporal-Pattern Backdoor Attack to Deep Reinforcement Learning. Yinbo Yu, Jiajia Liu, Shouqing Li, Ke Huang, Xudong Feng. AAML. 05 May 2022.
Detecting Backdoor Poisoning Attacks on Deep Neural Networks by Heatmap Clustering. Lukas Schulth, Christian Berghoff, Matthias Neu. AAML. 27 Apr 2022.
Sniper Backdoor: Single Client Targeted Backdoor Attack in Federated Learning. Gorka Abad, Servio Paguada, Oguzhan Ersoy, S. Picek, Víctor Julio Ramírez-Durán, A. Urbieta. FedML. 16 Mar 2022.
Low-Loss Subspace Compression for Clean Gains against Multi-Agent Backdoor Attacks. Siddhartha Datta, N. Shadbolt. AAML. 07 Mar 2022.
Jigsaw Puzzle: Selective Backdoor Attack to Subvert Malware Classifiers. Limin Yang, Zhi Chen, Jacopo Cortellazzi, Feargus Pendlebury, Kevin Tu, Fabio Pierazzi, Lorenzo Cavallaro, Gang Wang. AAML. 11 Feb 2022.
Backdoor Defense via Decoupling the Training Process. Kunzhe Huang, Yiming Li, Baoyuan Wu, Zhan Qin, Kui Ren. AAML, FedML. 05 Feb 2022.
Backdoors Stuck At The Frontdoor: Multi-Agent Backdoor Attacks That Backfire. Siddhartha Datta, N. Shadbolt. AAML. 28 Jan 2022.
Identifying a Training-Set Attack's Target Using Renormalized Influence Estimation. Zayd Hammoudeh, Daniel Lowd. TDI. 25 Jan 2022.
Hiding Behind Backdoors: Self-Obfuscation Against Generative Models. Siddhartha Datta, N. Shadbolt. SILM, AAML, AI4CE. 24 Jan 2022.
FedComm: Federated Learning as a Medium for Covert Communication. Dorjan Hitaj, Giulio Pagnotta, Briland Hitaj, Fernando Perez-Cruz, L. Mancini. FedML. 21 Jan 2022.
Post-Training Detection of Backdoor Attacks for Two-Class and Multi-Attack Scenarios. Zhen Xiang, David J. Miller, G. Kesidis. AAML. 20 Jan 2022.
Robust and Privacy-Preserving Collaborative Learning: A Comprehensive Survey. Shangwei Guo, Xu Zhang, Feiyu Yang, Tianwei Zhang, Yan Gan, Tao Xiang, Yang Liu. FedML. 19 Dec 2021.
Spinning Language Models: Risks of Propaganda-As-A-Service and Countermeasures. Eugene Bagdasaryan, Vitaly Shmatikov. SILM, AAML. 09 Dec 2021.
Adversarial Attacks Against Deep Generative Models on Data: A Survey. Hui Sun, Tianqing Zhu, Zhiqiu Zhang, Dawei Jin, Wanlei Zhou. AAML. 01 Dec 2021.
A General Framework for Defending Against Backdoor Attacks via Influence Graph. Xiaofei Sun, Jiwei Li, Xiaoya Li, Ziyao Wang, Tianwei Zhang, Han Qiu, Fei Wu, Chun Fan. AAML, TDI. 29 Nov 2021.
Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks. Xiangyu Qi, Tinghao Xie, Ruizhe Pan, Jifeng Zhu, Yong-Liang Yang, Kai Bu. AAML. 25 Nov 2021.
Backdoor Attack through Frequency Domain. Tong Wang, Yuan Yao, Feng Xu, Shengwei An, Yangqiu Song, Ting Wang. AAML. 22 Nov 2021.
AEVA: Black-box Backdoor Detection Using Adversarial Extreme Value Analysis. Junfeng Guo, Ang Li, Cong Liu. AAML. 28 Oct 2021.
Qu-ANTI-zation: Exploiting Quantization Artifacts for Achieving Adversarial Outcomes. Sanghyun Hong, Michael-Andrei Panaitescu-Liess, Yigitcan Kaya, Tudor Dumitras. MQ. 26 Oct 2021.
CoProtector: Protect Open-Source Code against Unauthorized Training Usage with Data Poisoning. Zhensu Sun, Xiaoning Du, Fu Song, Mingze Ni, Li Li. 25 Oct 2021.
Anti-Backdoor Learning: Training Clean Models on Poisoned Data. Yige Li, X. Lyu, Nodens Koren, Lingjuan Lyu, Bo-wen Li, Xingjun Ma. OnRL. 22 Oct 2021.