Defenses in Adversarial Machine Learning: A Survey

13 December 2023
Baoyuan Wu
Shaokui Wei
Mingli Zhu
Meixi Zheng
Zihao Zhu
Ruotong Wang
Hongrui Chen
Danni Yuan
Li Liu
Qingshan Liu
    AAML

Papers citing "Defenses in Adversarial Machine Learning: A Survey"

50 / 101 papers shown
Deciphering the Definition of Adversarial Robustness for post-hoc OOD Detectors
Peter Lorenz
Mario Fernandez
Jens Müller
Ullrich Köthe
AAML
195
1
0
21 Jun 2024
Domain Watermark: Effective and Harmless Dataset Copyright Protection is Closed at Hand
Junfeng Guo
Yiming Li
Lixu Wang
Shu-Tao Xia
Heng-Chiao Huang
Cong Liu
Boheng Li
78
61
0
09 Oct 2023
Towards Stable Backdoor Purification through Feature Shift Tuning
Rui Min
Zeyu Qin
Li Shen
Minhao Cheng
AAML
90
22
0
03 Oct 2023
Robust Principles: Architectural Design Principles for Adversarially Robust CNNs
Sheng-Hsuan Peng
Weilin Xu
Cory Cornelius
Matthew Hull
Kevin Wenliang Li
Rahul Duggal
Mansi Phute
Jason Martin
Duen Horng Chau
AAML
65
48
0
30 Aug 2023
Towards Attack-tolerant Federated Learning via Critical Parameter Analysis
Sungwon Han
Sungwon Park
Fangzhao Wu
Sundong Kim
Bin Zhu
Xing Xie
Meeyoung Cha
FedML
63
10
0
18 Aug 2023
Shared Adversarial Unlearning: Backdoor Mitigation by Unlearning Shared Adversarial Examples
Shaokui Wei
Ruotong Wang
H. Zha
Baoyuan Wu
TPM
81
38
0
20 Jul 2023
Detecting Backdoors in Pre-trained Encoders
Shiwei Feng
Guanhong Tao
Shuyang Cheng
Guangyu Shen
Xiangzhe Xu
Yingqi Liu
Kaiyuan Zhang
Shiqing Ma
Xiangyu Zhang
117
53
0
23 Mar 2023
Randomized Adversarial Training via Taylor Expansion
Gao Jin
Xinping Yi
Dengyu Wu
Ronghui Mu
Xiaowei Huang
AAML
79
35
0
19 Mar 2023
Aegis: Mitigating Targeted Bit-flip Attacks against Deep Neural Networks
Jialai Wang
Ziyuan Zhang
Meiqi Wang
Han Qiu
Tianwei Zhang
Qi Li
Zongpeng Li
Tao Wei
Chao Zhang
AAML
71
22
0
27 Feb 2023
ASSET: Robust Backdoor Data Detection Across a Multiplicity of Deep Learning Paradigms
Minzhou Pan
Yi Zeng
Lingjuan Lyu
Xinyu Lin
R. Jia
AAML
67
37
0
22 Feb 2023
Data Augmentation Alone Can Improve Adversarial Training
Lin Li
Michael W. Spratling
56
55
0
24 Jan 2023
Adversarial training with informed data selection
Marcele O. K. Mendonça
Javier Maroto
P. Frossard
P. Diniz
AAML
31
4
0
07 Jan 2023
DISCO: Adversarial Defense with Local Implicit Functions
Chih-Hui Ho
Nuno Vasconcelos
AAML
100
39
0
11 Dec 2022
Revisiting Outer Optimization in Adversarial Training
Ali Dabouei
Fariborz Taherkhani
Sobhan Soleymani
Nasser M. Nasrabadi
AAML
83
4
0
02 Sep 2022
One-vs-the-Rest Loss to Focus on Important Samples in Adversarial Training
Sekitoshi Kanai
Shin'ya Yamaguchi
Masanori Yamada
Hiroshi Takahashi
Kentaro Ohno
Yasutoshi Ida
AAML
58
9
0
21 Jul 2022
One-shot Neural Backdoor Erasing via Adversarial Weight Masking
Shuwen Chai
Jinghui Chen
AAML
75
35
0
10 Jul 2022
Improving Adversarial Robustness by Putting More Regularizations on Less Robust Samples
Dongyoon Yang
Insung Kong
Yongdai Kim
OOD AAML
69
10
0
07 Jun 2022
Robust Weight Perturbation for Adversarial Training
Chaojian Yu
Bo Han
Biwei Huang
Li Shen
Shiming Ge
Bo Du
Tongliang Liu
AAML
64
36
0
30 May 2022
Towards A Proactive ML Approach for Detecting Backdoor Poison Samples
Xiangyu Qi
Tinghao Xie
Jiachen T. Wang
Tong Wu
Saeed Mahloujifar
Prateek Mittal
AAML
63
52
0
26 May 2022
Diffusion Models for Adversarial Purification
Weili Nie
Brandon Guo
Yujia Huang
Chaowei Xiao
Arash Vahdat
Anima Anandkumar
WIGM
269
450
0
16 May 2022
A Survey of Robust Adversarial Training in Pattern Recognition: Fundamental, Theory, and Methodologies
Zhuang Qian
Kaizhu Huang
Qiufeng Wang
Xu-Yao Zhang
OOD AAML ObjD
103
73
0
26 Mar 2022
Self-Ensemble Adversarial Training for Improved Robustness
Hongjun Wang
Yisen Wang
OOD AAML
54
50
0
18 Mar 2022
Robustness and Accuracy Could Be Reconcilable by (Proper) Definition
Tianyu Pang
Min Lin
Xiao Yang
Junyi Zhu
Shuicheng Yan
120
123
0
21 Feb 2022
Backdoor Defense via Decoupling the Training Process
Kunzhe Huang
Yiming Li
Baoyuan Wu
Zhan Qin
Kui Ren
AAML FedML
56
194
0
05 Feb 2022
Boundary Defense Against Black-box Adversarial Attacks
Manjushree B. Aithal
Xiaohua Li
AAML
62
6
0
31 Jan 2022
On the Convergence and Robustness of Adversarial Training
Yisen Wang
Xingjun Ma
James Bailey
Jinfeng Yi
Bowen Zhou
Quanquan Gu
AAML
273
348
0
15 Dec 2021
Mutual Adversarial Training: Learning together is better than going alone
Jiang-Long Liu
Chun Pong Lau
Hossein Souri
Soheil Feizi
Ramalingam Chellappa
OOD AAML
65
25
0
09 Dec 2021
$\ell_\infty$-Robustness and Beyond: Unleashing Efficient Adversarial Training
H. M. Dolatabadi
S. Erfani
C. Leckie
OOD AAML
67
12
0
01 Dec 2021
AEVA: Black-box Backdoor Detection Using Adversarial Extreme Value Analysis
Junfeng Guo
Ang Li
Cong Liu
AAML
126
76
0
28 Oct 2021
Anti-Backdoor Learning: Training Clean Models on Poisoned Data
Yige Li
X. Lyu
Nodens Koren
Lingjuan Lyu
Yue Liu
Xingjun Ma
OnRL
91
336
0
22 Oct 2021
Improving Robustness using Generated Data
Sven Gowal
Sylvestre-Alvise Rebuffi
Olivia Wiles
Florian Stimberg
D. A. Calian
Timothy A. Mann
104
302
0
18 Oct 2021
Trigger Hunting with a Topological Prior for Trojan Detection
Xiaoling Hu
Xiaoyu Lin
Michael Cogswell
Yi Yao
Susmit Jha
Chao Chen
AAML
51
46
0
15 Oct 2021
Parameterizing Activation Functions for Adversarial Robustness
Sihui Dai
Saeed Mahloujifar
Prateek Mittal
AAML
79
32
0
11 Oct 2021
Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks
Hanxun Huang
Yisen Wang
S. Erfani
Quanquan Gu
James Bailey
Xingjun Ma
AAML TPM
112
102
0
07 Oct 2021
Adversarial Unlearning of Backdoors via Implicit Hypergradient
Yi Zeng
Si-An Chen
Won Park
Z. Morley Mao
Ming Jin
R. Jia
AAML
129
177
0
07 Oct 2021
Adversarial purification with Score-based generative models
Jongmin Yoon
Sung Ju Hwang
Juho Lee
DiffM
90
158
0
11 Jun 2021
Attacking Adversarial Attacks as A Defense
Boxi Wu
Heng Pan
Li Shen
Jindong Gu
Shuai Zhao
Zhifeng Li
Deng Cai
Xiaofei He
Wei Liu
AAML
71
32
0
09 Jun 2021
Improved OOD Generalization via Adversarial Training and Pre-training
Mingyang Yi
Lu Hou
Jiacheng Sun
Lifeng Shang
Xin Jiang
Qun Liu
Zhi-Ming Ma
VLM
70
84
0
24 May 2021
Random Noise Defense Against Query-Based Black-Box Attacks
Zeyu Qin
Yanbo Fan
H. Zha
Baoyuan Wu
AAML
127
60
0
23 Apr 2021
Rethinking the Backdoor Attacks' Triggers: A Frequency Perspective
Yi Zeng
Won Park
Z. Morley Mao
R. Jia
AAML
74
215
0
07 Apr 2021
LiBRe: A Practical Bayesian Approach to Adversarial Detection
Zhijie Deng
Xiao Yang
Shizhen Xu
Hang Su
Jun Zhu
BDL AAML
68
62
0
27 Mar 2021
Combating Adversaries with Anti-Adversaries
Motasem Alfarra
Juan C. Pérez
Ali K. Thabet
Adel Bibi
Philip Torr
Guohao Li
AAML
83
27
0
26 Mar 2021
Adversarial Attacks are Reversible with Natural Supervision
Chengzhi Mao
Mia Chiquier
Hao Wang
Junfeng Yang
Carl Vondrick
BDL AAML
86
56
0
26 Mar 2021
SpectralDefense: Detecting Adversarial Attacks on CNNs in the Fourier Domain
P. Harder
Franz-Josef Pfreundt
Margret Keuper
J. Keuper
AAML
75
50
0
04 Mar 2021
Online Adversarial Purification based on Self-Supervision
Changhao Shi
Chester Holtz
Zhengchao Wan
AAML
63
57
0
23 Jan 2021
On the Effectiveness of Small Input Noise for Defending Against Query-based Black-Box Attacks
Junyoung Byun
Hyojun Go
Changick Kim
AAML
170
21
0
13 Jan 2021
Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
Micah Goldblum
Dimitris Tsipras
Chulin Xie
Xinyun Chen
Avi Schwarzschild
Basel Alomair
Aleksander Madry
Yue Liu
Tom Goldstein
SILM
109
282
0
18 Dec 2020
Certified Robustness of Nearest Neighbors against Data Poisoning and Backdoor Attacks
Jinyuan Jia
Yupei Liu
Xiaoyu Cao
Neil Zhenqiang Gong
AAML
88
75
0
07 Dec 2020
Maximum Mean Discrepancy Test is Aware of Adversarial Attacks
Ruize Gao
Feng Liu
Jingfeng Zhang
Bo Han
Tongliang Liu
Gang Niu
Masashi Sugiyama
AAML
79
56
0
22 Oct 2020
Understanding Catastrophic Overfitting in Single-step Adversarial Training
Hoki Kim
Woojin Lee
Jaewook Lee
AAML
120
112
0
05 Oct 2020