Unlearnable Examples: Making Personal Data Unexploitable
Hanxun Huang, Xingjun Ma, S. Erfani, James Bailey, Yisen Wang
arXiv 2101.04898 · 13 January 2021 · MIACV
Papers citing "Unlearnable Examples: Making Personal Data Unexploitable" (37 of 137 papers shown)
Self-Ensemble Protection: Training Checkpoints Are Good Data Protectors
Sizhe Chen, Geng Yuan, Xinwen Cheng, Yifan Gong, Minghai Qin, Yanzhi Wang, X. Huang · AAML · 22 Nov 2022

UPTON: Preventing Authorship Leakage from Public Text Release via Data Poisoning
Ziyao Wang, Thai Le, Dongwon Lee · 17 Nov 2022

Generative Poisoning Using Random Discriminators
Dirren van Vlijmen, A. Kolmus, Zhuoran Liu, Zhengyu Zhao, Martha Larson · 02 Nov 2022

Transferable Unlearnable Examples
J. Ren, Han Xu, Yuxuan Wan, Xingjun Ma, Lichao Sun, Jiliang Tang · 18 Oct 2022

Data Isotopes for Data Provenance in DNNs
Emily Wenger, Xiuyu Li, Ben Y. Zhao, Vitaly Shmatikov · 29 Aug 2022

Hierarchical Perceptual Noise Injection for Social Media Fingerprint Privacy Protection
Simin Li, Huangxinxin Xu, Jiakai Wang, Aishan Liu, Fazhi He, Xianglong Liu, Dacheng Tao · AAML · 23 Aug 2022

Autoregressive Perturbations for Data Poisoning
Pedro Sandoval-Segura, Vasu Singla, Jonas Geiping, Micah Goldblum, Tom Goldstein, David Jacobs · AAML · 08 Jun 2022

One-Pixel Shortcut: on the Learning Preference of Deep Neural Networks
Shutong Wu, Sizhe Chen, Cihang Xie, X. Huang · AAML · 24 May 2022

Indiscriminate Data Poisoning Attacks on Neural Networks
Yiwei Lu, Gautam Kamath, Yaoliang Yu · AAML · 19 Apr 2022

Poisons that are learned faster are more effective
Pedro Sandoval-Segura, Vasu Singla, Liam H. Fowl, Jonas Geiping, Micah Goldblum, David Jacobs, Tom Goldstein · 19 Apr 2022

Robust Unlearnable Examples: Protecting Data Against Adversarial Learning
Shaopeng Fu, Fengxiang He, Yang Liu, Li Shen, Dacheng Tao · 28 Mar 2022

Deep Learning Serves Traffic Safety Analysis: A Forward-looking Review
Abolfazl Razi, Xiwen Chen, Huayu Li, Hao Wang, Brendan J. Russo, Yan Chen, Hongbin Yu · 07 Mar 2022

Efficient Attribute Unlearning: Towards Selective Removal of Input Attributes from Feature Representations
Tao Guo, Song Guo, Jiewei Zhang, Wenchao Xu, Junxiao Wang · MU · 27 Feb 2022

Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning
Hao He, Kaiwen Zha, Dina Katabi · AAML · 22 Feb 2022

Learnability Lock: Authorized Learnability Control Through Adversarial Invertible Transformations
Weiqi Peng, Jinghui Chen · AAML · 03 Feb 2022

Can Adversarial Training Be Manipulated By Non-Robust Features?
Lue Tao, Lei Feng, Hongxin Wei, Jinfeng Yi, Sheng-Jun Huang, Songcan Chen · AAML · 31 Jan 2022

Certifying Model Accuracy under Distribution Shifts
Aounon Kumar, Alexander Levine, Tom Goldstein, S. Feizi · OOD · 28 Jan 2022

Execute Order 66: Targeted Data Poisoning for Reinforcement Learning
Harrison Foley, Liam H. Fowl, Tom Goldstein, Gavin Taylor · AAML · 03 Jan 2022

On the Convergence and Robustness of Adversarial Training
Yisen Wang, Xingjun Ma, James Bailey, Jinfeng Yi, Bowen Zhou, Quanquan Gu · AAML · 15 Dec 2021

Amicable Aid: Perturbing Images to Improve Classification Performance
Juyeop Kim, Jun-Ho Choi, Soobeom Jang, Jong-Seok Lee · AAML · 09 Dec 2021

SoK: Anti-Facial Recognition Technology
Emily Wenger, Shawn Shan, Haitao Zheng, Ben Y. Zhao · PICV · 08 Dec 2021

Going Grayscale: The Road to Understanding and Improving Unlearnable Examples
Zhuoran Liu, Zhengyu Zhao, A. Kolmus, Tijn Berns, Twan van Laarhoven, Tom Heskes, Martha Larson · AAML · 25 Nov 2021

Fooling Adversarial Training with Inducing Noise
Zhirui Wang, Yifei Wang, Yisen Wang · 19 Nov 2021

Fast Yet Effective Machine Unlearning
Ayush K Tarun, Vikram S Chundawat, Murari Mandal, Mohan S. Kankanhalli · MU · 17 Nov 2021

Availability Attacks Create Shortcuts
Da Yu, Huishuai Zhang, Wei Chen, Jian Yin, Tie-Yan Liu · AAML · 01 Nov 2021

Robust Contrastive Learning Using Negative Samples with Diminished Semantics
Songwei Ge, Shlok Kumar Mishra, Haohan Wang, Chun-Liang Li, David Jacobs · SSL · 27 Oct 2021

Hard to Forget: Poisoning Attacks on Certified Machine Unlearning
Neil G. Marchant, Benjamin I. P. Rubinstein, Scott Alfeld · MU, AAML · 17 Sep 2021

Trustworthy AI: A Computational Perspective
Haochen Liu, Yiqi Wang, Wenqi Fan, Xiaorui Liu, Yaxin Li, Shaili Jain, Yunhao Liu, Anil K. Jain, Jiliang Tang · FaML · 12 Jul 2021

Adversarial Examples Make Strong Poisons
Liam H. Fowl, Micah Goldblum, Ping Yeh-Chiang, Jonas Geiping, Wojtek Czaja, Tom Goldstein · SILM · 21 Jun 2021

Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch
Hossein Souri, Liam H. Fowl, Ramalingam Chellappa, Micah Goldblum, Tom Goldstein · SILM · 16 Jun 2021

Disrupting Model Training with Adversarial Shortcuts
Ivan Evtimov, Ian Covert, Aditya Kusupati, Tadayoshi Kohno · AAML · 12 Jun 2021

Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training
Lue Tao, Lei Feng, Jinfeng Yi, Sheng-Jun Huang, Songcan Chen · AAML · 09 Feb 2021

With False Friends Like These, Who Can Notice Mistakes?
Lue Tao, Lei Feng, Jinfeng Yi, Songcan Chen · AAML · 29 Dec 2020

Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, D. Song, A. Madry, Bo-wen Li, Tom Goldstein · SILM · 18 Dec 2020

Adversarial Camouflage: Hiding Physical-World Attacks with Natural Styles
Ranjie Duan, Xingjun Ma, Yisen Wang, James Bailey, A. K. Qin, Yun Yang · AAML · 08 Mar 2020

Clean-Label Backdoor Attacks on Video Recognition Models
Shihao Zhao, Xingjun Ma, Xiang Zheng, James Bailey, Jingjing Chen, Yu-Gang Jiang · AAML · 06 Mar 2020

Adversarial examples in the physical world
Alexey Kurakin, Ian Goodfellow, Samy Bengio · SILM, AAML · 08 Jul 2016