Defending Neural Backdoors via Generative Distribution Modeling
Ximing Qiao, Yukun Yang, H. Li
arXiv:1910.04749 · 10 October 2019 · AAML

Papers citing "Defending Neural Backdoors via Generative Distribution Modeling"

50 of 107 citing papers shown.

BagFlip: A Certified Defense against Data Poisoning
Yuhao Zhang, Aws Albarghouthi, Loris D'Antoni · AAML · 26 May 2022

On Collective Robustness of Bagging Against Data Poisoning
Ruoxin Chen, Zenan Li, Jie Li, Chentao Wu, Junchi Yan · 26 May 2022

Model-Contrastive Learning for Backdoor Defense
Zhihao Yue, Jun Xia, Zhiwei Ling, Ming Hu, Ting Wang, Xian Wei, Mingsong Chen · AAML · 09 May 2022

A Survey on AI Sustainability: Emerging Trends on Learning Algorithms and Research Challenges
Zhenghua Chen, Min-man Wu, Alvin Chan, Xiaoli Li, Yew-Soon Ong · 08 May 2022

Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning
Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Sebastiano Vascon, Werner Zellinger, Bernhard A. Moser, Alina Oprea, Battista Biggio, Marcello Pelillo, Fabio Roli · AAML · 04 May 2022

Towards Effective and Robust Neural Trojan Defenses via Input Filtering
Kien Do, Haripriya Harikumar, Hung Le, D. Nguyen, T. Tran, Santu Rana, Dang Nguyen, Willy Susilo, Svetha Venkatesh · AAML · 24 Feb 2022

A Survey of Neural Trojan Attacks and Defenses in Deep Learning
Jie Wang, Ghulam Mubashar Hassan, Naveed Akhtar · AAML · 15 Feb 2022

Backdoor Defense via Decoupling the Training Process
Kunzhe Huang, Yiming Li, Baoyuan Wu, Zhan Qin, Kui Ren · AAML, FedML · 05 Feb 2022

Learnability Lock: Authorized Learnability Control Through Adversarial Invertible Transformations
Weiqi Peng, Jinghui Chen · AAML · 03 Feb 2022

Identifying a Training-Set Attack's Target Using Renormalized Influence Estimation
Zayd Hammoudeh, Daniel Lowd · TDI · 25 Jan 2022

Backdoor Defense with Machine Unlearning
Yang Liu, Mingyuan Fan, Cen Chen, Ximeng Liu, Zhuo Ma, Li Wang, Jianfeng Ma · AAML · 24 Jan 2022

A General Framework for Defending Against Backdoor Attacks via Influence Graph
Xiaofei Sun, Jiwei Li, Xiaoya Li, Ziyao Wang, Tianwei Zhang, Han Qiu, Leilei Gan, Chun Fan · AAML, TDI · 29 Nov 2021

An Overview of Backdoor Attacks Against Deep Neural Networks and Possible Defences
Wei Guo, B. Tondi, Mauro Barni · AAML · 16 Nov 2021

BioLeaF: A Bio-plausible Learning Framework for Training of Spiking Neural Networks
Yukun Yang, Peng Li · 14 Nov 2021

Adversarial Neuron Pruning Purifies Backdoored Deep Models
Dongxian Wu, Yisen Wang · AAML · 27 Oct 2021

Check Your Other Door! Creating Backdoor Attacks in the Frequency Domain
Hasan Hammoud, Guohao Li · AAML · 12 Sep 2021

Poison Ink: Robust and Invisible Backdoor Attack
Jie Zhang, Dongdong Chen, Qidong Huang, Jing Liao, Weiming Zhang, Huamin Feng, G. Hua, Nenghai Yu · AAML · 05 Aug 2021

Defending Against Backdoor Attacks in Natural Language Generation
Xiaofei Sun, Xiaoya Li, Yuxian Meng, Xiang Ao, Leilei Gan, Jiwei Li, Tianwei Zhang · AAML, SILM · 03 Jun 2021

Detecting Backdoor in Deep Neural Networks via Intentional Adversarial Perturbations
Mingfu Xue, Yinghao Wu, Zhiyu Wu, Yushu Zhang, Jian Wang, Weiqiang Liu · AAML · 29 May 2021

Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger
Fanchao Qi, Mukai Li, Yangyi Chen, Zhengyan Zhang, Zhiyuan Liu, Yasheng Wang, Maosong Sun · SILM · 26 May 2021

Hidden Backdoors in Human-Centric Language Models
Shaofeng Li, Hui Liu, Tian Dong, Benjamin Zi Hao Zhao, Minhui Xue, Haojin Zhu, Jialiang Lu · SILM · 01 May 2021

MISA: Online Defense of Trojaned Models using Misattributions
Panagiota Kiourti, Wenchao Li, Anirban Roy, Karan Sikka, Susmit Jha · 29 Mar 2021

Black-box Detection of Backdoor Attacks with Limited Information and Data
Yinpeng Dong, Xiao Yang, Zhijie Deng, Tianyu Pang, Zihao Xiao, Hang Su, Jun Zhu · AAML · 24 Mar 2021

EX-RAY: Distinguishing Injected Backdoor from Natural Features in Neural Networks by Examining Differential Feature Symmetry
Yingqi Liu, Guangyu Shen, Guanhong Tao, Zhenting Wang, Shiqing Ma, Xinming Zhang · AAML · 16 Mar 2021

Backdoor Scanning for Deep Neural Networks through K-Arm Optimization
Guangyu Shen, Yingqi Liu, Guanhong Tao, Shengwei An, Qiuling Xu, Shuyang Cheng, Shiqing Ma, Xinming Zhang · AAML · 09 Feb 2021

On Provable Backdoor Defense in Collaborative Learning
Ximing Qiao, Yuhua Bai, S. Hu, Ang Li, Yiran Chen, H. Li · AAML, FedML · 19 Jan 2021

What Do Deep Nets Learn? Class-wise Patterns Revealed in the Input Space
Shihao Zhao, Xingjun Ma, Yisen Wang, James Bailey, Yue Liu, Yu-Gang Jiang · AAML · 18 Jan 2021

Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks
Yige Li, Lingjuan Lyu, Nodens Koren, X. Lyu, Yue Liu, Xingjun Ma · AAML, FedML · 15 Jan 2021

Explainability Matters: Backdoor Attacks on Medical Imaging
Munachiso Nwadike, Takumi Miyawaki, Esha Sarkar, Michail Maniatakos, Farah E. Shamout · AAML · 30 Dec 2020

Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification
Shuyang Cheng, Yingqi Liu, Shiqing Ma, Xinming Zhang · AAML · 21 Dec 2020

Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, Basel Alomair, Aleksander Madry, Yue Liu, Tom Goldstein · SILM · 18 Dec 2020

DeepSweep: An Evaluation Framework for Mitigating DNN Backdoor Attacks using Data Augmentation
Han Qiu, Yi Zeng, Shangwei Guo, Tianwei Zhang, Meikang Qiu, B. Thuraisingham · AAML · 13 Dec 2020

Invisible Backdoor Attack with Sample-Specific Triggers
Yuezun Li, Yiming Li, Baoyuan Wu, Longkang Li, Ran He, Siwei Lyu · AAML, DiffM · 07 Dec 2020

Detecting Trojaned DNNs Using Counterfactual Attributions
Karan Sikka, Indranil Sur, Susmit Jha, Anirban Roy, Ajay Divakaran · AAML · 03 Dec 2020

A Sweet Rabbit Hole by DARCY: Using Honeypots to Detect Universal Trigger's Adversarial Attacks
Thai Le, Noseong Park, Dongwon Lee · 20 Nov 2020

ONION: A Simple and Effective Defense Against Textual Backdoor Attacks
Fanchao Qi, Yangyi Chen, Mukai Li, Yuan Yao, Zhiyuan Liu, Maosong Sun · AAML · 20 Nov 2020

Detecting Backdoors in Neural Networks Using Novel Feature-Based Anomaly Detection
Hao Fu, A. Veldanda, Prashanth Krishnamurthy, S. Garg, Farshad Khorrami · AAML · 04 Nov 2020

On Evaluating Neural Network Backdoor Defenses
A. Veldanda, S. Garg · AAML · 23 Oct 2020

Poisoned classifiers are not only backdoored, they are fundamentally broken
Mingjie Sun, Siddhant Agarwal, J. Zico Kolter · 18 Oct 2020

Embedding and Extraction of Knowledge in Tree Ensemble Classifiers
Wei Huang, Xingyu Zhao, Xiaowei Huang · AAML · 16 Oct 2020

A Framework of Randomized Selection Based Certified Defenses Against Data Poisoning Attacks
Ruoxin Chen, Jie Li, Chentao Wu, Bin Sheng, Ping Li · AAML · 18 Sep 2020

One-pixel Signature: Characterizing CNN Models for Backdoor Detection
Shanjiaoyang Huang, Weiqi Peng, Zhiwei Jia, Zhuowen Tu · 18 Aug 2020

Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review
Yansong Gao, Bao Gia Doan, Zhi-Li Zhang, Siqi Ma, Jiliang Zhang, Anmin Fu, Surya Nepal, Hyoungshick Kim · AAML · 21 Jul 2020

Backdoor Learning: A Survey
Yiming Li, Yong Jiang, Zhifeng Li, Shutao Xia · AAML · 17 Jul 2020

Odyssey: Creation, Analysis and Detection of Trojan Models
Marzieh Edraki, Nazmul Karim, Nazanin Rahnavard, Ajmal Mian, M. Shah · AAML · 16 Jul 2020

You Autocomplete Me: Poisoning Vulnerabilities in Neural Code Completion
R. Schuster, Congzheng Song, Eran Tromer, Vitaly Shmatikov · SILM, AAML · 05 Jul 2020

Natural Backdoor Attack on Text Data
Lichao Sun · SILM · 29 Jun 2020

Can We Mitigate Backdoor Attack Using Adversarial Detection Methods?
Kaidi Jin, Tianwei Zhang, Chao Shen, Yufei Chen, Ming Fan, Chenhao Lin, Ting Liu · AAML · 26 Jun 2020

Backdoor Attacks Against Deep Learning Systems in the Physical World
Emily Wenger, Josephine Passananti, A. Bhagoji, Yuanshun Yao, Haitao Zheng, Ben Y. Zhao · AAML · 25 Jun 2020

FaceHack: Triggering backdoored facial recognition systems using facial characteristics
Esha Sarkar, Hadjer Benkraouda, Michail Maniatakos · AAML · 20 Jun 2020