Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks
22 June 2020
Avi Schwarzschild, Micah Goldblum, Arjun Gupta, John P. Dickerson, Tom Goldstein
Tags: AAML, TDI
arXiv: 2006.12557

Papers citing "Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks" (32 of 32 papers shown)

MTL-UE: Learning to Learn Nothing for Multi-Task Learning
Yi Yu, Song Xia, Siyuan Yang, Chenqi Kong, Wenhan Yang, Shijian Lu, Yap-Peng Tan, Alex Chichung Kot
08 May 2025

Mitigating Backdoor Triggered and Targeted Data Poisoning Attacks in Voice Authentication Systems
Alireza Mohammadi, Keshav Sood, D. Thiruvady, A. Nazari
Tags: AAML
06 May 2025

Commercial LLM Agents Are Already Vulnerable to Simple Yet Dangerous Attacks
Ang Li, Yin Zhou, Vethavikashini Chithrra Raghuram, Tom Goldstein, Micah Goldblum
Tags: AAML
12 Feb 2025

PureEBM: Universal Poison Purification via Mid-Run Dynamics of Energy-Based Models
Omead Brandon Pooladzandi, Jeffrey Q. Jiang, Sunay Bhat, Gregory Pottie
Tags: AAML
28 May 2024

Purify Unlearnable Examples via Rate-Constrained Variational Autoencoders
Yi Yu, Yufei Wang, Song Xia, Wenhan Yang, Shijian Lu, Yap-Peng Tan, A.C. Kot
Tags: AAML
02 May 2024

Robust Survival Analysis with Adversarial Regularization
Michael Potter, Stefano Maxenti, Michael Everett
Tags: AAML
26 Dec 2023

Towards Sample-specific Backdoor Attack with Clean Labels via Attribute Trigger
Yiming Li, Mingyan Zhu, Junfeng Guo, Tao Wei, Shu-Tao Xia, Zhan Qin
Tags: AAML
03 Dec 2023

Towards Understanding How Self-training Tolerates Data Backdoor Poisoning
Soumyadeep Pal, Ren Wang, Yuguang Yao, Sijia Liu
20 Jan 2023

Unlearnable Clusters: Towards Label-agnostic Unlearnable Examples
Jiaming Zhang, Xingjun Ma, Qiaomin Yi, Jitao Sang, Yugang Jiang, Yaowei Wang, Changsheng Xu
31 Dec 2022

FairRoad: Achieving Fairness for Recommender Systems with Optimized Antidote Data
Minghong Fang, Jia-Wei Liu, Michinari Momma, Yi Sun
13 Dec 2022

Rethinking Backdoor Data Poisoning Attacks in the Context of Semi-Supervised Learning
Marissa Connor, Vincent Emanuele
Tags: SILM, AAML
05 Dec 2022

Deep Fake Detection, Deterrence and Response: Challenges and Opportunities
Amin Azmoodeh, Ali Dehghantanha
26 Nov 2022

Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protection
Yiming Li, Yang Bai, Yong Jiang, Yong-Liang Yang, Shutao Xia, Bo Li
Tags: AAML
27 Sep 2022

A Systematic Evaluation of Node Embedding Robustness
Alexandru Mara, Jefrey Lijffijt, Stephan Günnemann, T. D. Bie
Tags: AAML
16 Sep 2022

Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attacks
Tianwei Liu, Yu Yang, Baharan Mirzasoleiman
Tags: AAML
14 Aug 2022

Backdoor Attacks on Bayesian Neural Networks using Reverse Distribution
Zhixin Pan, Prabhat Mishra
Tags: AAML
18 May 2022

Backdooring Explainable Machine Learning
Maximilian Noppel, Lukas Peter, Christian Wressnegger
Tags: AAML
20 Apr 2022

Indiscriminate Data Poisoning Attacks on Neural Networks
Yiwei Lu, Gautam Kamath, Yaoliang Yu
Tags: AAML
19 Apr 2022

COPA: Certifying Robust Policies for Offline Reinforcement Learning against Poisoning Attacks
Fan Wu, Linyi Li, Chejian Xu, Huan Zhang, B. Kailkhura, K. Kenthapadi, Ding Zhao, Bo-wen Li
Tags: AAML, OffRL
16 Mar 2022

Can Adversarial Training Be Manipulated By Non-Robust Features?
Lue Tao, Lei Feng, Hongxin Wei, Jinfeng Yi, Sheng-Jun Huang, Songcan Chen
Tags: AAML
31 Jan 2022

Security for Machine Learning-based Software Systems: a survey of threats, practices and challenges
Huaming Chen, Muhammad Ali Babar
Tags: AAML
12 Jan 2022

Availability Attacks Create Shortcuts
Da Yu, Huishuai Zhang, Wei Chen, Jian Yin, Tie-Yan Liu
Tags: AAML
01 Nov 2021

Hard to Forget: Poisoning Attacks on Certified Machine Unlearning
Neil G. Marchant, Benjamin I. P. Rubinstein, Scott Alfeld
Tags: MU, AAML
17 Sep 2021

Check Your Other Door! Creating Backdoor Attacks in the Frequency Domain
Hasan Hammoud, Guohao Li
Tags: AAML
12 Sep 2021

Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch
Hossein Souri, Liam H. Fowl, Ramalingam Chellappa, Micah Goldblum, Tom Goldstein
Tags: SILM
16 Jun 2021

Disrupting Model Training with Adversarial Shortcuts
Ivan Evtimov, Ian Covert, Aditya Kusupati, Tadayoshi Kohno
Tags: AAML
12 Jun 2021

Defending Against Backdoor Attacks in Natural Language Generation
Xiaofei Sun, Xiaoya Li, Yuxian Meng, Xiang Ao, Fei Wu, Jiwei Li, Tianwei Zhang
Tags: AAML, SILM
03 Jun 2021

Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, D. Song, A. Madry, Bo-wen Li, Tom Goldstein
Tags: SILM
18 Dec 2020

Strong Data Augmentation Sanitizes Poisoning and Backdoor Attacks Without an Accuracy Tradeoff
Eitan Borgnia, Valeriia Cherepanova, Liam H. Fowl, Amin Ghiasi, Jonas Geiping, Micah Goldblum, Tom Goldstein, Arjun Gupta
Tags: AAML
18 Nov 2020

Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching
Jonas Geiping, Liam H. Fowl, Yifan Jiang, W. Czaja, Gavin Taylor, Michael Moeller, Tom Goldstein
Tags: AAML
04 Sep 2020

Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review
Yansong Gao, Bao Gia Doan, Zhi-Li Zhang, Siqi Ma, Jiliang Zhang, Anmin Fu, Surya Nepal, Hyoungshick Kim
Tags: AAML
21 Jul 2020

Bullseye Polytope: A Scalable Clean-Label Poisoning Attack with Improved Transferability
H. Aghakhani, Dongyu Meng, Yu-Xiang Wang, Christopher Kruegel, Giovanni Vigna
Tags: AAML
01 May 2020