Adversarial Examples Make Strong Poisons
arXiv:2106.10807 · 21 June 2021
Liam H. Fowl, Micah Goldblum, Ping Yeh-Chiang, Jonas Geiping, Wojtek Czaja, Tom Goldstein
SILM
Papers citing "Adversarial Examples Make Strong Poisons" (46 of 96 shown)
Unlearnable Examples for Diffusion Models: Protect Data from Unauthorized Exploitation
Zhengyue Zhao, Jinhao Duan, Xingui Hu, Kaidi Xu, Chenan Wang, Rui Zhang, Zidong Du, Qi Guo, Yunji Chen · DiffM, WIGM · 02 Jun 2023

What Can We Learn from Unlearnable Datasets?
Pedro Sandoval-Segura, Vasu Singla, Jonas Geiping, Micah Goldblum, Tom Goldstein · 30 May 2023

Sharpness-Aware Data Poisoning Attack
Pengfei He, Han Xu, J. Ren, Yingqian Cui, Hui Liu, Charu C. Aggarwal, Jiliang Tang · AAML · 24 May 2023

Towards Benchmarking and Assessing Visual Naturalness of Physical World Adversarial Attacks
Simin Li, Shuing Zhang, Gujun Chen, Dong Wang, Pu Feng, Jiakai Wang, Aishan Liu, Xin Yi, Xianglong Liu · AAML · 22 May 2023

Towards Generalizable Data Protection With Transferable Unlearnable Examples
Bin Fang, Bo-wen Li, Shuang Wu, Tianyi Zheng, Shouhong Ding, Ran Yi, Lizhuang Ma · 18 May 2023
Re-thinking Data Availability Attacks Against Deep Neural Networks
Bin Fang, Bo-wen Li, Shuang Wu, Ran Yi, Shouhong Ding, Lizhuang Ma · AAML · 18 May 2023
Unlearnable Examples Give a False Sense of Security: Piercing through Unexploitable Data with Learnable Examples
Wanzhu Jiang, Yunfeng Diao, He-Nan Wang, Jianxin Sun, Hao Wu, Richang Hong · 16 May 2023

Pick your Poison: Undetectability versus Robustness in Data Poisoning Attacks
Nils Lukas, Florian Kerschbaum · 07 May 2023

Assessing Vulnerabilities of Adversarial Learning Algorithm through Poisoning Attacks
Jingfeng Zhang, Bo Song, Bo Han, Lei Liu, Gang Niu, Masashi Sugiyama · AAML · 30 Apr 2023

Probably Approximately Correct Federated Learning
Xiaojin Zhang, Anbu Huang, Lixin Fan, Kai Chen, Qiang Yang · FedML · 10 Apr 2023

The Devil's Advocate: Shattering the Illusion of Unexploitable Data using Diffusion Models
H. M. Dolatabadi, S. Erfani, C. Leckie · DiffM · 15 Mar 2023

CUDA: Convolution-based Unlearnable Datasets
Vinu Sankar Sadasivan, Mahdi Soltanolkotabi, S. Feizi · MU · 07 Mar 2023

Exploring the Limits of Model-Targeted Indiscriminate Data Poisoning Attacks
Yiwei Lu, Gautam Kamath, Yaoliang Yu · AAML · 07 Mar 2023

Image Shortcut Squeezing: Countering Perturbative Availability Poisons with Compression
Zhuoran Liu, Zhengyu Zhao, Martha Larson · 31 Jan 2023

Backdoor Attacks Against Dataset Distillation
Yugeng Liu, Zheng Li, Michael Backes, Yun Shen, Yang Zhang · DD · 03 Jan 2023

Unlearnable Clusters: Towards Label-agnostic Unlearnable Examples
Jiaming Zhang, Xingjun Ma, Qiaomin Yi, Jitao Sang, Yugang Jiang, Yaowei Wang, Changsheng Xu · 31 Dec 2022

Training Data Influence Analysis and Estimation: A Survey
Zayd Hammoudeh, Daniel Lowd · TDI · 09 Dec 2022

Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning
Hongbin Liu, Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong · SSL · 06 Dec 2022

Self-Ensemble Protection: Training Checkpoints Are Good Data Protectors
Sizhe Chen, Geng Yuan, Xinwen Cheng, Yifan Gong, Minghai Qin, Yanzhi Wang, X. Huang · AAML · 22 Nov 2022

UPTON: Preventing Authorship Leakage from Public Text Release via Data Poisoning
Ziyao Wang, Thai Le, Dongwon Lee · 17 Nov 2022

Differentially Private Optimizers Can Learn Adversarially Robust Models
Yuan Zhang, Zhiqi Bu · 16 Nov 2022

Generative Poisoning Using Random Discriminators
Dirren van Vlijmen, A. Kolmus, Zhuoran Liu, Zhengyu Zhao, Martha Larson · 02 Nov 2022

Transferable Unlearnable Examples
J. Ren, Han Xu, Yuxuan Wan, Xingjun Ma, Lichao Sun, Jiliang Tang · 18 Oct 2022

Autoregressive Perturbations for Data Poisoning
Pedro Sandoval-Segura, Vasu Singla, Jonas Geiping, Micah Goldblum, Tom Goldstein, David Jacobs · AAML · 08 Jun 2022

One-Pixel Shortcut: on the Learning Preference of Deep Neural Networks
Shutong Wu, Sizhe Chen, Cihang Xie, X. Huang · AAML · 24 May 2022

Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning
Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Sebastiano Vascon, Werner Zellinger, Bernhard A. Moser, Alina Oprea, Battista Biggio, Marcello Pelillo, Fabio Roli · AAML · 04 May 2022

Indiscriminate Data Poisoning Attacks on Neural Networks
Yiwei Lu, Gautam Kamath, Yaoliang Yu · AAML · 19 Apr 2022

Poisons that are learned faster are more effective
Pedro Sandoval-Segura, Vasu Singla, Liam H. Fowl, Jonas Geiping, Micah Goldblum, David Jacobs, Tom Goldstein · 19 Apr 2022

Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets
Florian Tramèr, Reza Shokri, Ayrton San Joaquin, Hoang Minh Le, Matthew Jagielski, Sanghyun Hong, Nicholas Carlini · MIACV · 31 Mar 2022

Robust Unlearnable Examples: Protecting Data Against Adversarial Learning
Shaopeng Fu, Fengxiang He, Yang Liu, Li Shen, Dacheng Tao · 28 Mar 2022

Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning
Hao He, Kaiwen Zha, Dina Katabi · AAML · 22 Feb 2022

Learnability Lock: Authorized Learnability Control Through Adversarial Invertible Transformations
Weiqi Peng, Jinghui Chen · AAML · 03 Feb 2022

Can Adversarial Training Be Manipulated By Non-Robust Features?
Lue Tao, Lei Feng, Hongxin Wei, Jinfeng Yi, Sheng-Jun Huang, Songcan Chen · AAML · 31 Jan 2022

Identifying a Training-Set Attack's Target Using Renormalized Influence Estimation
Zayd Hammoudeh, Daniel Lowd · TDI · 25 Jan 2022

Towards Adversarial Evaluations for Inexact Machine Unlearning
Shashwat Goel, Ameya Prabhu, Amartya Sanyal, Ser-Nam Lim, Philip Torr, Ponnurangam Kumaraguru · AAML, ELM, MU · 17 Jan 2022

Going Grayscale: The Road to Understanding and Improving Unlearnable Examples
Zhuoran Liu, Zhengyu Zhao, A. Kolmus, Tijn Berns, Twan van Laarhoven, Tom Heskes, Martha Larson · AAML · 25 Nov 2021

Fooling Adversarial Training with Inducing Noise
Zhirui Wang, Yifei Wang, Yisen Wang · 19 Nov 2021

Availability Attacks Create Shortcuts
Da Yu, Huishuai Zhang, Wei Chen, Jian Yin, Tie-Yan Liu · AAML · 01 Nov 2021

FLEA: Provably Robust Fair Multisource Learning from Unreliable Training Data
Eugenia Iofinova, Nikola Konstantinov, Christoph H. Lampert · FaML · 22 Jun 2021

Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch
Hossein Souri, Liam H. Fowl, Ramalingam Chellappa, Micah Goldblum, Tom Goldstein · SILM · 16 Jun 2021

Disrupting Model Training with Adversarial Shortcuts
Ivan Evtimov, Ian Covert, Aditya Kusupati, Tadayoshi Kohno · AAML · 12 Jun 2021

Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training
Lue Tao, Lei Feng, Jinfeng Yi, Sheng-Jun Huang, Songcan Chen · AAML · 09 Feb 2021

Unlearnable Examples: Making Personal Data Unexploitable
Hanxun Huang, Xingjun Ma, S. Erfani, James Bailey, Yisen Wang · MIACV · 13 Jan 2021

With False Friends Like These, Who Can Notice Mistakes?
Lue Tao, Lei Feng, Jinfeng Yi, Songcan Chen · AAML · 29 Dec 2020

Towards Class-Oriented Poisoning Attacks Against Neural Networks
Bingyin Zhao, Yingjie Lao · SILM, AAML · 31 Jul 2020

Disentangling Adversarial Robustness and Generalization
David Stutz, Matthias Hein, Bernt Schiele · AAML, OOD · 03 Dec 2018