Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks

30 May 2018 · Kang Liu, Brendan Dolan-Gavitt, S. Garg · AAML

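Since this page only indexes the paper and its citing works, the sketch below illustrates the general idea that the title refers to: prune the neurons (here, convolutional channels) that are least active on clean inputs, then fine-tune the pruned network on clean data. This is a minimal, illustrative PyTorch sketch, not the authors' reference implementation; all names (fine_prune, fine_tune, model, clean_loader, conv_layer) are placeholders introduced here, and details such as the pruning fraction are arbitrary.

    # Minimal sketch of a fine-pruning style defense (illustrative only).
    import torch
    import torch.nn as nn

    def fine_prune(model, clean_loader, conv_layer, prune_fraction=0.2, device="cpu"):
        """Zero out the channels of `conv_layer` that are least active on clean data.

        Returns the pruned model and the indices of the pruned ("dormant") channels.
        """
        model.eval().to(device)
        activations = []

        # Record mean absolute activation per output channel of the target layer.
        def hook(_module, _inputs, output):
            activations.append(output.abs().mean(dim=(0, 2, 3)).detach())

        handle = conv_layer.register_forward_hook(hook)
        with torch.no_grad():
            for x, _ in clean_loader:
                model(x.to(device))
        handle.remove()

        mean_act = torch.stack(activations).mean(dim=0)
        n_prune = int(prune_fraction * mean_act.numel())
        dormant = torch.argsort(mean_act)[:n_prune]  # least-active channels first

        # Mask-style pruning: zero the dormant channels' filters and biases.
        with torch.no_grad():
            conv_layer.weight[dormant] = 0.0
            if conv_layer.bias is not None:
                conv_layer.bias[dormant] = 0.0
        return model, dormant

    def fine_tune(model, clean_loader, conv_layer, dormant, epochs=1, lr=1e-4, device="cpu"):
        """Fine-tune on clean data while keeping the pruned channels at zero."""
        model.train().to(device)
        optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            for x, y in clean_loader:
                optimizer.zero_grad()
                loss = loss_fn(model(x.to(device)), y.to(device))
                loss.backward()
                optimizer.step()
                with torch.no_grad():  # re-apply the pruning mask after each step
                    conv_layer.weight[dormant] = 0.0
                    if conv_layer.bias is not None:
                        conv_layer.bias[dormant] = 0.0
        return model

Usage would be something like `model, dormant = fine_prune(model, clean_loader, model.conv_last)` followed by `fine_tune(model, clean_loader, model.conv_last, dormant)`, where `clean_loader` yields trusted clean batches; a production implementation would typically remove the pruned channels structurally rather than masking them.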
Papers citing "Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks"

50 / 217 papers shown
Robust Backdoor Attacks against Deep Neural Networks in Real Physical World
  Mingfu Xue, Can He, Shichang Sun, Jian Wang, Weiqiang Liu · AAML · 15 Apr 2021
A Backdoor Attack against 3D Point Cloud Classifiers
  Zhen Xiang, David J. Miller, Siheng Chen, Xi Li, G. Kesidis · 3DPC, AAML · 12 Apr 2021
Backdoor Attack in the Physical World
  Yiming Li, Tongqing Zhai, Yong Jiang, Zhifeng Li, Shutao Xia · 06 Apr 2021
Privacy and Trust Redefined in Federated Machine Learning
  Pavlos Papadopoulos, Will Abramson, A. Hall, Nikolaos Pitropakis, William J. Buchanan · 29 Mar 2021
Black-box Detection of Backdoor Attacks with Limited Information and Data
  Yinpeng Dong, Xiao Yang, Zhijie Deng, Tianyu Pang, Zihao Xiao, Hang Su, Jun Zhu · AAML · 24 Mar 2021
TOP: Backdoor Detection in Neural Networks via Transferability of Perturbation
  Todd P. Huster, E. Ekwedike · SILM · 18 Mar 2021
EX-RAY: Distinguishing Injected Backdoor from Natural Features in Neural Networks by Examining Differential Feature Symmetry
  Yingqi Liu, Guangyu Shen, Guanhong Tao, Zhenting Wang, Shiqing Ma, Xinming Zhang · AAML · 16 Mar 2021
Proof-of-Learning: Definitions and Practice
  Hengrui Jia, Mohammad Yaghini, Christopher A. Choquette-Choo, Natalie Dullerud, Anvith Thudi, Varun Chandrasekaran, Nicolas Papernot · AAML · 09 Mar 2021
Backdoor Scanning for Deep Neural Networks through K-Arm Optimization
  Guangyu Shen, Yingqi Liu, Guanhong Tao, Shengwei An, Qiuling Xu, Shuyang Cheng, Shiqing Ma, Xinming Zhang · AAML · 09 Feb 2021
Baseline Pruning-Based Approach to Trojan Detection in Neural Networks
  P. Bajcsy, Michael Majurski · AAML · 22 Jan 2021
DeepPayload: Black-box Backdoor Attack on Deep Learning Models through Neural Payload Injection
  Yuanchun Li, Jiayi Hua, Haoyu Wang, Chunyang Chen, Yunxin Liu · FedML, SILM · 18 Jan 2021
Robust Machine Learning Systems: Challenges, Current Trends, Perspectives, and the Road Ahead
  Muhammad Shafique, Mahum Naseer, T. Theocharides, C. Kyrkou, O. Mutlu, Lois Orosa, Jungwook Choi · OOD · 04 Jan 2021
Hardware and Software Optimizations for Accelerating Deep Neural Networks: Survey of Current Trends, Challenges, and the Road Ahead
  Maurizio Capra, Beatrice Bussolino, Alberto Marchisio, Guido Masera, Maurizio Martina, Muhammad Shafique · BDL · 21 Dec 2020
Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification
  Shuyang Cheng, Yingqi Liu, Shiqing Ma, Xinming Zhang · AAML · 21 Dec 2020
Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
  Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, D. Song, A. Madry, Bo-wen Li, Tom Goldstein · SILM · 18 Dec 2020
TrojanZoo: Towards Unified, Holistic, and Practical Evaluation of Neural Backdoors
  Ren Pang, Zheng-Wei Zhang, Xiangshan Gao, Zhaohan Xi, S. Ji, Peng Cheng, Xiapu Luo, Ting Wang · AAML · 16 Dec 2020
HaS-Nets: A Heal and Select Mechanism to Defend DNNs Against Backdoor Attacks for Data Collection Scenarios
  Hassan Ali, Surya Nepal, S. Kanhere, S. Jha · AAML · 14 Dec 2020
DeepSweep: An Evaluation Framework for Mitigating DNN Backdoor Attacks using Data Augmentation
  Han Qiu, Yi Zeng, Shangwei Guo, Tianwei Zhang, Meikang Qiu, B. Thuraisingham · AAML · 13 Dec 2020
Robustness and Transferability of Universal Attacks on Compressed Models
  Alberto G. Matachana, Kenneth T. Co, Luis Muñoz-González, David Martínez, Emil C. Lupu · AAML · 10 Dec 2020
Privacy and Robustness in Federated Learning: Attacks and Defenses
  Lingjuan Lyu, Han Yu, Xingjun Ma, Chen Chen, Lichao Sun, Jun Zhao, Qiang Yang, Philip S. Yu · FedML · 07 Dec 2020
A Sweet Rabbit Hole by DARCY: Using Honeypots to Detect Universal Trigger's Adversarial Attacks
  Thai Le, Noseong Park, Dongwon Lee · 20 Nov 2020
ONION: A Simple and Effective Defense Against Textual Backdoor Attacks
  Fanchao Qi, Yangyi Chen, Mukai Li, Yuan Yao, Zhiyuan Liu, Maosong Sun · AAML · 20 Nov 2020
Detecting Backdoors in Neural Networks Using Novel Feature-Based Anomaly Detection
  Hao Fu, A. Veldanda, Prashanth Krishnamurthy, S. Garg, Farshad Khorrami · AAML · 04 Nov 2020
Being Single Has Benefits. Instance Poisoning to Deceive Malware Classifiers
  T. Shapira, David Berend, Ishai Rosenberg, Yang Liu, A. Shabtai, Yuval Elovici · AAML · 30 Oct 2020
Mitigating Backdoor Attacks in Federated Learning
  Chen Wu, Xian Yang, Sencun Zhu, P. Mitra · FedML, AAML · 28 Oct 2020
Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes
  Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong · AAML · 26 Oct 2020
Input-Aware Dynamic Backdoor Attack
  A. Nguyen, Anh Tran · AAML · 16 Oct 2020
Reverse Engineering Imperceptible Backdoor Attacks on Deep Neural Networks for Detection and Training Set Cleansing
  Zhen Xiang, David J. Miller, G. Kesidis · 15 Oct 2020
Can Adversarial Weight Perturbations Inject Neural Backdoors?
  Siddhant Garg, Adarsh Kumar, Vibhor Goel, Yingyu Liang · AAML · 04 Aug 2020
Cassandra: Detecting Trojaned Networks from Adversarial Perturbations
  Xiaoyu Zhang, Ajmal Mian, Rohit Gupta, Nazanin Rahnavard, M. Shah · AAML · 28 Jul 2020
Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review
  Yansong Gao, Bao Gia Doan, Zhi-Li Zhang, Siqi Ma, Jiliang Zhang, Anmin Fu, Surya Nepal, Hyoungshick Kim · AAML · 21 Jul 2020
Backdoor Learning: A Survey
  Yiming Li, Yong Jiang, Zhifeng Li, Shutao Xia · AAML · 17 Jul 2020
Odyssey: Creation, Analysis and Detection of Trojan Models
  Marzieh Edraki, Nazmul Karim, Nazanin Rahnavard, Ajmal Mian, M. Shah · AAML · 16 Jul 2020
Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks
  Yunfei Liu, Xingjun Ma, James Bailey, Feng Lu · AAML · 05 Jul 2020
ConFoc: Content-Focus Protection Against Trojan Attacks on Neural Networks
  Miguel Villarreal-Vasquez, B. Bhargava · AAML · 01 Jul 2020
Backdoor Attacks Against Deep Learning Systems in the Physical World
  Emily Wenger, Josephine Passananti, A. Bhagoji, Yuanshun Yao, Haitao Zheng, Ben Y. Zhao · AAML · 25 Jun 2020
Subpopulation Data Poisoning Attacks
  Matthew Jagielski, Giorgio Severi, Niklas Pousette Harger, Alina Oprea · AAML, SILM · 24 Jun 2020
Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks
  Avi Schwarzschild, Micah Goldblum, Arjun Gupta, John P. Dickerson, Tom Goldstein · AAML, TDI · 22 Jun 2020
Backdoor Attacks to Graph Neural Networks
  Zaixi Zhang, Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong · GNN · 19 Jun 2020
An Embarrassingly Simple Approach for Trojan Attack in Deep Neural Networks
  Ruixiang Tang, Mengnan Du, Ninghao Liu, Fan Yang, Xia Hu · AAML · 15 Jun 2020
Blind Backdoors in Deep Learning Models
  Eugene Bagdasaryan, Vitaly Shmatikov · AAML, FedML, SILM · 08 May 2020
Weight Poisoning Attacks on Pre-trained Models
  Keita Kurita, Paul Michel, Graham Neubig · AAML, SILM · 14 Apr 2020
A Survey of Convolutional Neural Networks: Analysis, Applications, and Prospects
  Zewen Li, Wenjie Yang, Shouheng Peng, Fan Liu · HAI, 3DV · 01 Apr 2020
PoisHygiene: Detecting and Mitigating Poisoning Attacks in Neural Networks
  Junfeng Guo, Zelun Kong, Cong Liu · AAML · 24 Mar 2020
Stop-and-Go: Exploring Backdoor Attacks on Deep Reinforcement Learning-based Traffic Congestion Control Systems
  Yue Wang, Esha Sarkar, Wenqing Li, Michail Maniatakos, Saif Eddin Jabari · AAML · 17 Mar 2020
Towards Probabilistic Verification of Machine Unlearning
  David M. Sommer, Liwei Song, Sameer Wagh, Prateek Mittal · AAML · 09 Mar 2020
Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers
  Giorgio Severi, J. Meyer, Scott E. Coull, Alina Oprea · AAML, SILM · 02 Mar 2020
Defending against Backdoor Attack on Deep Neural Networks
  Kaidi Xu, Sijia Liu, Pin-Yu Chen, Pu Zhao, X. Lin, Xue Lin · AAML · 26 Feb 2020
NNoculation: Catching BadNets in the Wild
  A. Veldanda, Kang Liu, Benjamin Tan, Prashanth Krishnamurthy, Farshad Khorrami, Ramesh Karri, Brendan Dolan-Gavitt, S. Garg · AAML, OnRL · 19 Feb 2020
Label-Consistent Backdoor Attacks
  Alexander Turner, Dimitris Tsipras, A. Madry · AAML · 05 Dec 2019