arXiv:2011.09527
Strong Data Augmentation Sanitizes Poisoning and Backdoor Attacks Without an Accuracy Tradeoff
18 November 2020
Eitan Borgnia, Valeriia Cherepanova, Liam H. Fowl, Amin Ghiasi, Jonas Geiping, Micah Goldblum, Tom Goldstein, Arjun Gupta
AAML

Papers citing "Strong Data Augmentation Sanitizes Poisoning and Backdoor Attacks Without an Accuracy Tradeoff"
28 papers shown

Adversarial Hubness in Multi-Modal Retrieval
Tingwei Zhang, Fnu Suya, Rishi Jha, Collin Zhang, Vitaly Shmatikov
AAML · 18 Dec 2024

Securing Voice Authentication Applications Against Targeted Data Poisoning
Alireza Mohammadi, Keshav Sood, D. Thiruvady, A. Nazari
AAML · 25 Jun 2024

A Survey on Machine Unlearning: Techniques and New Emerged Privacy Risks
Hengzhu Liu, Ping Xiong, Tianqing Zhu, Philip S. Yu
10 Jun 2024

PSBD: Prediction Shift Uncertainty Unlocks Backdoor Detection
Wei Li, Pin-Yu Chen, Sijia Liu, Ren Wang
AAML · 09 Jun 2024

PureEBM: Universal Poison Purification via Mid-Run Dynamics of Energy-Based Models
Omead Brandon Pooladzandi, Jeffrey Q. Jiang, Sunay Bhat, Gregory Pottie
AAML · 28 May 2024

Partial train and isolate, mitigate backdoor attack
Yong Li, Han Gao
AAML · 26 May 2024

Trustworthy Distributed AI Systems: Robustness, Privacy, and Governance
Wenqi Wei, Ling Liu
02 Feb 2024

Defending Against Patch-based Backdoor Attacks on Self-Supervised Learning
Ajinkya Tejankar, Maziar Sanjabi, Qifan Wang, Sinong Wang, Hamed Firooz, Hamed Pirsiavash, L Tan
AAML · 04 Apr 2023

CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning
Hritik Bansal, Nishad Singhi, Yu Yang, Fan Yin, Aditya Grover, Kai-Wei Chang
AAML · 06 Mar 2023

Towards Understanding How Self-training Tolerates Data Backdoor Poisoning
Soumyadeep Pal, Ren Wang, Yuguang Yao, Sijia Liu
20 Jan 2023

BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense
Shuyang Cheng, Guanhong Tao, Yingqi Liu, Shengwei An, Xiangzhe Xu, ..., Guangyu Shen, Kaiyuan Zhang, Qiuling Xu, Shiqing Ma, Xiangyu Zhang
AAML · 16 Jan 2023

Backdoor Cleansing with Unlabeled Data
Lu Pang, Tao Sun, Haibin Ling, Chao Chen
AAML · 22 Nov 2022

FocusedCleaner: Sanitizing Poisoned Graphs for Robust GNN-based Node Classification
Yulin Zhu, Liang Tong, Gaolei Li, Xiapu Luo, Kai Zhou
25 Oct 2022

Augmentation Backdoors
J. Rance, Yiren Zhao, Ilia Shumailov, Robert D. Mullins
AAML, SILM · 29 Sep 2022

Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attacks
Tianwei Liu, Yu Yang, Baharan Mirzasoleiman
AAML · 14 Aug 2022

Data-free Backdoor Removal based on Channel Lipschitzness
Runkai Zheng, Rong Tang, Jianze Li, Li Liu
AAML · 05 Aug 2022

Interpolating Compressed Parameter Subspaces
Siddhartha Datta, N. Shadbolt
19 May 2022

Low-Loss Subspace Compression for Clean Gains against Multi-Agent Backdoor Attacks
Siddhartha Datta, N. Shadbolt
AAML · 07 Mar 2022

On the Effectiveness of Adversarial Training against Backdoor Attacks
Yinghua Gao, Dongxian Wu, Jingfeng Zhang, Guanhao Gan, Shutao Xia, Gang Niu, Masashi Sugiyama
AAML · 22 Feb 2022

Backdoor Defense via Decoupling the Training Process
Kunzhe Huang, Yiming Li, Baoyuan Wu, Zhan Qin, Kui Ren
AAML, FedML · 05 Feb 2022

Can Adversarial Training Be Manipulated By Non-Robust Features?
Lue Tao, Lei Feng, Hongxin Wei, Jinfeng Yi, Sheng-Jun Huang, Songcan Chen
AAML · 31 Jan 2022

Backdoors Stuck At The Frontdoor: Multi-Agent Backdoor Attacks That Backfire
Siddhartha Datta, N. Shadbolt
AAML · 28 Jan 2022

Adversarial Attacks Against Deep Generative Models on Data: A Survey
Hui Sun, Tianqing Zhu, Zhiqiu Zhang, Dawei Jin, Wanlei Zhou
AAML · 01 Dec 2021

Adversarial Neuron Pruning Purifies Backdoored Deep Models
Dongxian Wu, Yisen Wang
AAML · 27 Oct 2021

Survey: Image Mixing and Deleting for Data Augmentation
Humza Naveed, Saeed Anwar, Munawar Hayat, Kashif Javed, Ajmal Mian
13 Jun 2021

EX-RAY: Distinguishing Injected Backdoor from Natural Features in Neural Networks by Examining Differential Feature Symmetry
Yingqi Liu, Guangyu Shen, Guanhong Tao, Zhenting Wang, Shiqing Ma, Xinming Zhang
AAML · 16 Mar 2021

Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, D. Song, A. Madry, Bo-wen Li, Tom Goldstein
SILM · 18 Dec 2020

Backdoor Learning: A Survey
Yiming Li, Yong Jiang, Zhifeng Li, Shutao Xia
AAML · 17 Jul 2020