Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks
Kang Liu, Brendan Dolan-Gavitt, S. Garg
arXiv 1805.12185 · 30 May 2018 · AAML
Papers citing "Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks" (50 of 217 papers shown)
On the Permanence of Backdoors in Evolving Models
Huiying Li, A. Bhagoji, Yuxin Chen, Haitao Zheng, Ben Y. Zhao
AAML · 08 Jun 2022

CASSOCK: Viable Backdoor Attacks against DNN in The Wall of Source-Specific Backdoor Defences
Shang Wang, Yansong Gao, Anmin Fu, Zhi-Li Zhang, Yuqing Zhang, W. Susilo, Dongxi Liu
AAML · 31 May 2022

BadDet: Backdoor Attacks on Object Detection
Shih-Han Chan, Yinpeng Dong, Junyi Zhu, Xiaolu Zhang, Jun Zhou
AAML · 28 May 2022

Backdoor Attacks on Bayesian Neural Networks using Reverse Distribution
Zhixin Pan, Prabhat Mishra
AAML · 18 May 2022

Verifying Neural Networks Against Backdoor Attacks
Long H. Pham, Jun Sun
AAML · 14 May 2022

PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning
Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong
13 May 2022

Imperceptible Backdoor Attack: From Input Space to Feature Representation
Nan Zhong, Zhenxing Qian, Xinpeng Zhang
AAML · 06 May 2022

Narcissus: A Practical Clean-Label Backdoor Attack with Limited Information
Yi Zeng, Minzhou Pan, H. Just, Lingjuan Lyu, M. Qiu, R. Jia
AAML · 11 Apr 2022

COPA: Certifying Robust Policies for Offline Reinforcement Learning against Poisoning Attacks
Fan Wu, Linyi Li, Chejian Xu, Huan Zhang, B. Kailkhura, K. Kenthapadi, Ding Zhao, Bo-wen Li
AAML, OffRL · 16 Mar 2022

Physical Backdoor Attacks to Lane Detection Systems in Autonomous Driving
Xingshuo Han, Guowen Xu, Yuanpu Zhou, Xuehuan Yang, Jiwei Li, Tianwei Zhang
AAML · 02 Mar 2022

On the Effectiveness of Adversarial Training against Backdoor Attacks
Yinghua Gao, Dongxian Wu, Jingfeng Zhang, Guanhao Gan, Shutao Xia, Gang Niu, Masashi Sugiyama
AAML · 22 Feb 2022
Constrained Optimization with Dynamic Bound-scaling for Effective NLP Backdoor Defense
Guangyu Shen, Yingqi Liu, Guanhong Tao, Qiuling Xu, Zhuo Zhang, Shengwei An, Shiqing Ma, Xinming Zhang
AAML · 11 Feb 2022
Jigsaw Puzzle: Selective Backdoor Attack to Subvert Malware Classifiers
Limin Yang, Zhi Chen, Jacopo Cortellazzi, Feargus Pendlebury, Kevin Tu, Fabio Pierazzi, Lorenzo Cavallaro, Gang Wang
AAML · 11 Feb 2022

Preserving Privacy and Security in Federated Learning
Truc D. T. Nguyen, My T. Thai
FedML · 07 Feb 2022

Few-Shot Backdoor Attacks on Visual Object Tracking
Yiming Li, Haoxiang Zhong, Xingjun Ma, Yong Jiang, Shutao Xia
AAML · 31 Jan 2022

SSLGuard: A Watermarking Scheme for Self-supervised Learning Pre-trained Encoders
Tianshuo Cong, Xinlei He, Yang Zhang
27 Jan 2022

Identifying a Training-Set Attack's Target Using Renormalized Influence Estimation
Zayd Hammoudeh, Daniel Lowd
TDI · 25 Jan 2022

FedComm: Federated Learning as a Medium for Covert Communication
Dorjan Hitaj, Giulio Pagnotta, Briland Hitaj, Fernando Perez-Cruz, L. Mancini
FedML · 21 Jan 2022

Dangerous Cloaking: Natural Trigger based Backdoor Attacks on Object Detectors in the Physical World
Hua Ma, Yinshan Li, Yansong Gao, A. Abuadbba, Zhi-Li Zhang, Anmin Fu, Hyoungshick Kim, S. Al-Sarawi, N. Surya, Derek Abbott
21 Jan 2022

Post-Training Detection of Backdoor Attacks for Two-Class and Multi-Attack Scenarios
Zhen Xiang, David J. Miller, G. Kesidis
AAML · 20 Jan 2022

How to Backdoor HyperNetwork in Personalized Federated Learning?
Phung Lai, Nhathai Phan, Issa M. Khalil, Abdallah Khreishah, Xintao Wu
AAML, FedML · 18 Jan 2022

Spinning Language Models: Risks of Propaganda-As-A-Service and Countermeasures
Eugene Bagdasaryan, Vitaly Shmatikov
SILM, AAML · 09 Dec 2021

FIBA: Frequency-Injection based Backdoor Attack in Medical Image Analysis
Yu Feng, Benteng Ma, Jing Zhang, Shanshan Zhao, Yong-quan Xia, Dacheng Tao
AAML · 02 Dec 2021

Adversarial Attacks Against Deep Generative Models on Data: A Survey
Hui Sun, Tianqing Zhu, Zhiqiu Zhang, Dawei Jin, Wanlei Zhou
AAML · 01 Dec 2021

A General Framework for Defending Against Backdoor Attacks via Influence Graph
Xiaofei Sun, Jiwei Li, Xiaoya Li, Ziyao Wang, Tianwei Zhang, Han Qiu, Fei Wu, Chun Fan
AAML, TDI · 29 Nov 2021
Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks
Xiangyu Qi, Tinghao Xie, Ruizhe Pan, Jifeng Zhu, Yong-Liang Yang, Kai Bu
AAML · 25 Nov 2021

Backdoor Attack through Frequency Domain
Tong Wang, Yuan Yao, Feng Xu, Shengwei An, Hanghang Tong, Ting Wang
AAML · 22 Nov 2021

Triggerless Backdoor Attack for NLP Tasks with Clean Labels
Leilei Gan, Jiwei Li, Tianwei Zhang, Xiaoya Li, Yuxian Meng, Fei Wu, Yi Yang, Shangwei Guo, Chun Fan
AAML, SILM · 15 Nov 2021

Backdoor Pre-trained Models Can Transfer to All
Lujia Shen, S. Ji, Xuhong Zhang, Jinfeng Li, Jing Chen, Jie Shi, Chengfang Fang, Jianwei Yin, Ting Wang
AAML, SILM · 30 Oct 2021

Adversarial Neuron Pruning Purifies Backdoored Deep Models
Dongxian Wu, Yisen Wang
AAML · 27 Oct 2021

Semantic Host-free Trojan Attack
Haripriya Harikumar, Kien Do, Santu Rana, Sunil R. Gupta, Svetha Venkatesh
26 Oct 2021

Anti-Backdoor Learning: Training Clean Models on Poisoned Data
Yige Li, X. Lyu, Nodens Koren, Lingjuan Lyu, Bo-wen Li, Xingjun Ma
OnRL · 22 Oct 2021

Securing Federated Learning: A Covert Communication-based Approach
Yuan-ai Xie, Jiawen Kang, Dusit Niyato, Nguyen Thi Thanh Van, Nguyen Cong Luong, Zhixin Liu, Han Yu
FedML · 05 Oct 2021

Trustworthy AI: From Principles to Practices
Bo-wen Li, Peng Qi, Bo Liu, Shuai Di, Jingen Liu, Jiquan Pei, Jinfeng Yi, Bowen Zhou
04 Oct 2021

FooBaR: Fault Fooling Backdoor Attack on Neural Network Training
J. Breier, Xiaolu Hou, Martín Ochoa, Jesus Solano
SILM, AAML · 23 Sep 2021

Check Your Other Door! Creating Backdoor Attacks in the Frequency Domain
Hasan Hammoud, Guohao Li
AAML · 12 Sep 2021

SanitAIs: Unsupervised Data Augmentation to Sanitize Trojaned Neural Networks
Kiran Karra, C. Ashcraft, Cash Costello
AAML · 09 Sep 2021
How to Inject Backdoors with Better Consistency: Logit Anchoring on Clean Data
Zhiyuan Zhang, Lingjuan Lyu, Weiqiang Wang, Lichao Sun, Xu Sun
03 Sep 2021

Quantization Backdoors to Deep Learning Commercial Frameworks
Hua Ma, Huming Qiu, Yansong Gao, Zhi-Li Zhang, A. Abuadbba, Minhui Xue, Anmin Fu, Jiliang Zhang, S. Al-Sarawi, Derek Abbott
MQ · 20 Aug 2021

Regulating Ownership Verification for Deep Neural Networks: Scenarios, Protocols, and Prospects
Fangqi Li, Shi-Lin Wang, Alan Wee-Chung Liew
20 Aug 2021

SoK: How Robust is Image Classification Deep Neural Network Watermarking? (Extended Version)
Nils Lukas, Edward Jiang, Xinda Li, Florian Kerschbaum
AAML · 11 Aug 2021

The Devil is in the GAN: Backdoor Attacks and Defenses in Deep Generative Models
Ambrish Rawat, Killian Levacher, M. Sinn
AAML · 03 Aug 2021

BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning
Jinyuan Jia, Yupei Liu, Neil Zhenqiang Gong
SILM, SSL · 01 Aug 2021

Spinning Sequence-to-Sequence Models with Meta-Backdoors
Eugene Bagdasaryan, Vitaly Shmatikov
SILM, AAML · 22 Jul 2021

Subnet Replacement: Deployment-stage backdoor attack against deep neural networks in gray-box setting
Xiangyu Qi, Jifeng Zhu, Chulin Xie, Yong-Liang Yang
AAML · 15 Jul 2021

Immunization of Pruning Attack in DNN Watermarking Using Constant Weight Code
Minoru Kuribayashi, Tatsuya Yasui, Asad U. Malik, N. Funabiki
AAML · 07 Jul 2021

Preventing Machine Learning Poisoning Attacks Using Authentication and Provenance
Jack W. Stokes, P. England, K. Kane
AAML · 20 May 2021

De-Pois: An Attack-Agnostic Defense against Data Poisoning Attacks
Jian Chen, Xuxin Zhang, Rui Zhang, Chen Wang, Ling Liu
AAML · 08 May 2021

Poisoning the Unlabeled Dataset of Semi-Supervised Learning
Nicholas Carlini
AAML · 04 May 2021

SPECTRE: Defending Against Backdoor Attacks Using Robust Statistics
J. Hayase, Weihao Kong, Raghav Somani, Sewoong Oh
AAML · 22 Apr 2021