Backdoor Learning: A Survey

arXiv:2007.08745 · 17 July 2020
Yiming Li, Yong Jiang, Zhifeng Li, Shutao Xia
AAML

Papers citing "Backdoor Learning: A Survey"

50 of 170 citing papers shown

• Backdoor Attack in the Physical World
  Yiming Li, Tongqing Zhai, Yong Jiang, Zhifeng Li, Shutao Xia
  06 Apr 2021
• PointBA: Towards Backdoor Attacks in 3D Point Cloud
  Xinke Li, Zhirui Chen, Yue Zhao, Zekun Tong, Yabang Zhao, A. Lim, Qiufeng Wang
  3DPC, AAML · 30 Mar 2021
• Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models
  Wenkai Yang, Lei Li, Zhiyuan Zhang, Xuancheng Ren, Xu Sun, Bin He
  SILM · 29 Mar 2021
• Black-box Detection of Backdoor Attacks with Limited Information and Data
  Yinpeng Dong, Xiao Yang, Zhijie Deng, Tianyu Pang, Zihao Xiao, Hang Su, Jun Zhu
  AAML · 24 Mar 2021
• Improving Adversarial Robustness via Channel-wise Activation Suppressing
  Yang Bai, Yuyuan Zeng, Yong Jiang, Shutao Xia, Xingjun Ma, Yisen Wang
  AAML · 11 Mar 2021
• Hidden Backdoor Attack against Semantic Segmentation Models
  Yiming Li, Yanjie Li, Yalei Lv, Yong Jiang, Shutao Xia
  AAML · 06 Mar 2021
• Backdoor Scanning for Deep Neural Networks through K-Arm Optimization
  Guangyu Shen, Yingqi Liu, Guanhong Tao, Shengwei An, Qiuling Xu, Shuyang Cheng, Shiqing Ma, Xinming Zhang
  AAML · 09 Feb 2021
• Investigating Bi-Level Optimization for Learning and Vision from a Unified Perspective: A Survey and Beyond
  Risheng Liu, Jiaxin Gao, Jin Zhang, Deyu Meng, Zhouchen Lin
  AI4CE · 27 Jan 2021
• What Do Deep Nets Learn? Class-wise Patterns Revealed in the Input Space
  Shihao Zhao, Xingjun Ma, Yisen Wang, James Bailey, Yue Liu, Yu-Gang Jiang
  AAML · 18 Jan 2021
• DeepPayload: Black-box Backdoor Attack on Deep Learning Models through Neural Payload Injection
  Yan Liang, Jiayi Hua, Haoyu Wang, Chunyang Chen, Yunxin Liu
  FedML, SILM · 18 Jan 2021
• Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks
  Yige Li, Lingjuan Lyu, Nodens Koren, X. Lyu, Yue Liu, Xingjun Ma
  AAML, FedML · 15 Jan 2021
• Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification
  Shuyang Cheng, Yingqi Liu, Shiqing Ma, Xinming Zhang
  AAML · 21 Dec 2020
• Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
  Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, D. Song, Aleksander Madry, Yue Liu, Tom Goldstein
  SILM · 18 Dec 2020
• DeepSweep: An Evaluation Framework for Mitigating DNN Backdoor Attacks using Data Augmentation
  Han Qiu, Yi Zeng, Shangwei Guo, Tianwei Zhang, Meikang Qiu, B. Thuraisingham
  AAML · 13 Dec 2020
• Invisible Backdoor Attack with Sample-Specific Triggers
  Yuezun Li, Yiming Li, Baoyuan Wu, Longkang Li, Ran He, Siwei Lyu
  AAML, DiffM · 07 Dec 2020
• Backdoor Attacks on the DNN Interpretation System
  Shihong Fang, A. Choromańska
  FAtt, AAML · 21 Nov 2020
• Strong Data Augmentation Sanitizes Poisoning and Backdoor Attacks Without an Accuracy Tradeoff
  Eitan Borgnia, Valeriia Cherepanova, Liam H. Fowl, Amin Ghiasi, Jonas Geiping, Micah Goldblum, Tom Goldstein, Arjun Gupta
  AAML · 18 Nov 2020
• Backdoor Attack against Speaker Verification
  Tongqing Zhai, Yiming Li, Zi-Mou Zhang, Baoyuan Wu, Yong Jiang, Shutao Xia
  AAML · 22 Oct 2020
• Input-Aware Dynamic Backdoor Attack
  A. Nguyen, Anh Tran
  AAML · 16 Oct 2020
• BlockFLA: Accountable Federated Learning via Hybrid Blockchain Architecture
  H. Desai, Mustafa Safa Ozdayi, Murat Kantarcioglu
  FedML · 14 Oct 2020
• Open-sourced Dataset Protection via Backdoor Watermarking
  Yiming Li, Zi-Mou Zhang, Jiawang Bai, Baoyuan Wu, Yong Jiang, Shutao Xia
  12 Oct 2020
• What Do You See? Evaluation of Explainable Artificial Intelligence (XAI) Interpretability through Neural Backdoors
  Yi-Shan Lin, Wen-Chuan Lee, Z. Berkay Celik
  XAI · 22 Sep 2020
• Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching
  Jonas Geiping, Liam H. Fowl, Wenjie Huang, W. Czaja, Gavin Taylor, Michael Moeller, Tom Goldstein
  AAML · 04 Sep 2020
• CLEANN: Accelerated Trojan Shield for Embedded Neural Networks
  Mojan Javaheripi, Mohammad Samragh, Gregory Fields, T. Javidi, F. Koushanfar
  AAML, FedML · 04 Sep 2020
• One-pixel Signature: Characterizing CNN Models for Backdoor Detection
  Shanjiaoyang Huang, Weiqi Peng, Zhiwei Jia, Zhuowen Tu
  18 Aug 2020
• Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks
  Jinyuan Jia, Xiaoyu Cao, Neil Zhenqiang Gong
  SILM · 11 Aug 2020
• Can Adversarial Weight Perturbations Inject Neural Backdoors?
  Siddhant Garg, Adarsh Kumar, Vibhor Goel, Yingyu Liang
  AAML · 04 Aug 2020
• Removing Backdoor-Based Watermarks in Neural Networks with Limited Data
  Xuankai Liu, Fengting Li, Bihan Wen, Qi Li
  AAML · 02 Aug 2020
• Practical Detection of Trojan Neural Networks: Data-Limited and Data-Free Cases
  Ren Wang, Gaoyuan Zhang, Sijia Liu, Pin-Yu Chen, Jinjun Xiong, Meng Wang
  AAML · 31 Jul 2020
• Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review
  Yansong Gao, Bao Gia Doan, Zhi-Li Zhang, Siqi Ma, Jiliang Zhang, Anmin Fu, Surya Nepal, Hyoungshick Kim
  AAML · 21 Jul 2020
• Deep Learning Backdoors
  Shaofeng Li, Shiqing Ma, Minhui Xue, Benjamin Zi Hao Zhao
  16 Jul 2020
• Attack of the Tails: Yes, You Really Can Backdoor Federated Learning
  Hongyi Wang, Kartik K. Sreenivasan, Shashank Rajput, Harit Vishwakarma, Saurabh Agarwal, Jy-yong Sohn, Kangwook Lee, Dimitris Papailiopoulos
  FedML · 09 Jul 2020
• Defending against Backdoors in Federated Learning with Robust Learning Rate
  Mustafa Safa Ozdayi, Murat Kantarcioglu, Yulia R. Gel
  FedML · 07 Jul 2020
• Backdoor attacks and defenses in feature-partitioned collaborative learning
  Yang Liu, Zhi-qian Yi, Tianjian Chen
  AAML, FedML · 07 Jul 2020
• Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks
  Yunfei Liu, Xingjun Ma, James Bailey, Feng Lu
  AAML · 05 Jul 2020
• ConFoc: Content-Focus Protection Against Trojan Attacks on Neural Networks
  Miguel Villarreal-Vasquez, B. Bhargava
  AAML · 01 Jul 2020
• Deep Partition Aggregation: Provable Defense against General Poisoning Attacks
  Alexander Levine, Soheil Feizi
  AAML · 26 Jun 2020
• Backdoor Attacks Against Deep Learning Systems in the Physical World
  Emily Wenger, Josephine Passananti, A. Bhagoji, Yuanshun Yao, Haitao Zheng, Ben Y. Zhao
  AAML · 25 Jun 2020
• Graph Backdoor
  Zhaohan Xi, Ren Pang, S. Ji, Ting Wang
  AI4CE, AAML · 21 Jun 2020
• Backdoor Attacks to Graph Neural Networks
  Zaixi Zhang, Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong
  GNN · 19 Jun 2020
• An Embarrassingly Simple Approach for Trojan Attack in Deep Neural Networks
  Ruixiang Tang, Mengnan Du, Ninghao Liu, Fan Yang, Xia Hu
  AAML · 15 Jun 2020
• Backdoor Attacks on Federated Meta-Learning
  Chien-Lun Chen, L. Golubchik, Marco Paolieri
  FedML · 12 Jun 2020
• Scalable Backdoor Detection in Neural Networks
  Haripriya Harikumar, Vuong Le, Santu Rana, Sourangshu Bhattacharya, Sunil R. Gupta, Svetha Venkatesh
  10 Jun 2020
• BadNL: Backdoor Attacks against NLP Models with Semantic-preserving Improvements
  Xiaoyi Chen, A. Salem, Dingfan Chen, Michael Backes, Shiqing Ma, Qingni Shen, Zhonghai Wu, Yang Zhang
  SILM · 01 Jun 2020
• Blind Backdoors in Deep Learning Models
  Eugene Bagdasaryan, Vitaly Shmatikov
  AAML, FedML, SILM · 08 May 2020
• Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness
  Pu Zhao, Pin-Yu Chen, Payel Das, Karthikeyan N. Ramamurthy, Xue Lin
  AAML · 30 Apr 2020
• Neural Network Laundering: Removing Black-Box Backdoor Watermarks from Deep Neural Networks
  William Aiken, Hyoungshick Kim, Simon S. Woo
  22 Apr 2020
• Targeted Attack for Deep Hashing based Retrieval
  Jiawang Bai, Bin Chen, Yiming Li, Dongxian Wu, Weiwei Guo, Shutao Xia, En-Hui Yang
  AAML · 15 Apr 2020
• Weight Poisoning Attacks on Pre-trained Models
  Keita Kurita, Paul Michel, Graham Neubig
  AAML, SILM · 14 Apr 2020
• Rethinking the Trigger of Backdoor Attack
  Yiming Li, Tongqing Zhai, Baoyuan Wu, Yong Jiang, Zhifeng Li, Shutao Xia
  LLMSV · 09 Apr 2020