Unlearnable Examples: Making Personal Data Unexploitable

13 January 2021
Hanxun Huang
Xingjun Ma
S. Erfani
James Bailey
Yisen Wang
    MIACV
arXiv:2101.04898

Papers citing "Unlearnable Examples: Making Personal Data Unexploitable"

Showing 50 of 137 citing papers.
Efficient Availability Attacks against Supervised and Contrastive Learning Simultaneously
Yihan Wang
Yifan Zhu
Xiao-Shan Gao
AAML
23
6
0
06 Feb 2024
Data Poisoning for In-context Learning
Pengfei He
Han Xu
Yue Xing
Hui Liu
Makoto Yamada
Jiliang Tang
AAML
SILM
22
10
0
03 Feb 2024
Unlearnable Examples For Time Series
Yujing Jiang
Xingjun Ma
S. Erfani
James Bailey
AI4TS
23
1
0
03 Feb 2024
Game-Theoretic Unlearnable Example Generator
Shuang Liu
Yihan Wang
Xiao-Shan Gao
AAML
29
8
0
31 Jan 2024
Attack and Reset for Unlearning: Exploiting Adversarial Noise toward Machine Unlearning through Parameter Re-initialization
Yoonhwa Jung
Ikhyun Cho
Shun-Hsiang Hsu
J. Hockenmaier
AAML
MU
17
4
0
17 Jan 2024
Data-Dependent Stability Analysis of Adversarial Training
Yihan Wang
Shuang Liu
Xiao-Shan Gao
36
3
0
06 Jan 2024
A Dataset and Benchmark for Copyright Infringement Unlearning from Text-to-Image Diffusion Models
Rui-ya Ma
Qiang Zhou
Yizhu Jin
Daquan Zhou
Bangjun Xiao
...
Jingtong Hu
Xiaodong Xie
Zhen Dong
Shanghang Zhang
Shiji Zhou
6
2
0
04 Jan 2024
PosCUDA: Position based Convolution for Unlearnable Audio Datasets
V. Gokul
Shlomo Dubnov
SSL
26
3
0
04 Jan 2024
DeRDaVa: Deletion-Robust Data Valuation for Machine Learning
Xiao Tian
Rachael Hwee Ling Sim
Jue Fan
K. H. Low
TDI
17
2
0
18 Dec 2023
Detection and Defense of Unlearnable Examples
Yifan Zhu
Lijia Yu
Xiao-Shan Gao
AAML
19
7
0
14 Dec 2023
Corrupting Convolution-based Unlearnable Datasets with Pixel-based Image Transformations
Xianlong Wang
Shengshan Hu
Minghui Li
Zhifei Yu
Ziqi Zhou
Leo Yu Zhang
AAML
26
0
0
30 Nov 2023
Trainwreck: A damaging adversarial attack on image classifiers
Jan Zahálka
23
1
0
24 Nov 2023
MetaCloak: Preventing Unauthorized Subject-driven Text-to-image Diffusion-based Synthesis via Meta-learning
Yixin Liu
Chenrui Fan
Yutong Dai
Xun Chen
Pan Zhou
Lichao Sun
DiffM
26
19
0
22 Nov 2023
Stable Unlearnable Example: Enhancing the Robustness of Unlearnable Examples via Stable Error-Minimizing Noise
Yixin Liu
Kaidi Xu
Xun Chen
Lichao Sun
21
7
0
22 Nov 2023
BrainWash: A Poisoning Attack to Forget in Continual Learning
Ali Abbasi
Parsa Nooralinejad
Hamed Pirsiavash
Soheil Kolouri
CLL
KELM
AAML
32
5
0
20 Nov 2023
PACOL: Poisoning Attacks Against Continual Learners
Huayu Li
G. Ditzler
AAML
17
2
0
18 Nov 2023
Making Harmful Behaviors Unlearnable for Large Language Models
Xin Zhou
Yi Lu
Ruotian Ma
Tao Gui
Qi Zhang
Xuanjing Huang
MU
41
9
0
02 Nov 2023
Protecting Publicly Available Data With Machine Learning Shortcuts
Nicolas M. Muller
Maximilian Burgert
Pascal Debus
Jennifer Williams
Philip Sperl
Konstantin Böttinger
18
0
0
30 Oct 2023
Balance, Imbalance, and Rebalance: Understanding Robust Overfitting from a Minimax Game Perspective
Yifei Wang
Liangchen Li
Jiansheng Yang
Zhouchen Lin
Yisen Wang
28
11
0
30 Oct 2023
IMPRESS: Evaluating the Resilience of Imperceptible Perturbations Against Unauthorized Data Usage in Diffusion-Based Generative AI
Bochuan Cao
Changjiang Li
Ting Wang
Jinyuan Jia
Bo Li
Jinghui Chen
DiffM
28
21
0
30 Oct 2023
Segue: Side-information Guided Generative Unlearnable Examples for Facial Privacy Protection in Real World
Zhiling Zhang
Jie Zhang
Kui Zhang
Wenbo Zhou
Weiming Zhang
Neng H. Yu
20
1
0
24 Oct 2023
GraphCloak: Safeguarding Task-specific Knowledge within Graph-structured Data from Unauthorized Exploitation
Yixin Liu
Chenrui Fan
Xun Chen
Pan Zhou
Lichao Sun
56
4
0
11 Oct 2023
Domain Watermark: Effective and Harmless Dataset Copyright Protection is Closed at Hand
Junfeng Guo
Yiming Li
Lixu Wang
Shu-Tao Xia
Heng-Chiao Huang
Cong Liu
Boheng Li
30
50
0
09 Oct 2023
Transferable Availability Poisoning Attacks
Yiyong Liu
Michael Backes
Xiao Zhang
AAML
19
3
0
08 Oct 2023
Exploiting Machine Unlearning for Backdoor Attacks in Deep Learning System
Peixin Zhang
Jun Sun
Mingtian Tan
Xinyu Wang
AAML
32
4
0
12 Sep 2023
APBench: A Unified Benchmark for Availability Poisoning Attacks and Defenses
Tianrui Qin
Xitong Gao
Juanjuan Zhao
Kejiang Ye
Chengjie Xu
AAML
18
6
0
07 Aug 2023
What do neural networks learn in image classification? A frequency shortcut perspective
Shunxin Wang
Raymond N. J. Veldhuis
Christoph Brune
N. Strisciuglio
16
21
0
19 Jul 2023
What Distributions are Robust to Indiscriminate Poisoning Attacks for Linear Learners?
Fnu Suya
X. Zhang
Yuan Tian
David E. Evans
OOD
AAML
26
2
0
03 Jul 2023
Make Text Unlearnable: Exploiting Effective Patterns to Protect Personal Data
Xinzhe Li
Ming Liu
Shang Gao
MU
25
8
0
02 Jul 2023
Exploring Model Dynamics for Accumulative Poisoning Discovery
Jianing Zhu
Xiawei Guo
Jiangchao Yao
Chao Du
Li He
Shuo Yuan
Tongliang Liu
Liang Wang
Bo Han
AAML
16
0
0
06 Jun 2023
Unlearnable Examples for Diffusion Models: Protect Data from Unauthorized Exploitation
Zhengyue Zhao
Jinhao Duan
Xingui Hu
Kaidi Xu
Chenan Wang
Rui Zhang
Zidong Du
Qi Guo
Yunji Chen
DiffM
WIGM
28
27
0
02 Jun 2023
What Can We Learn from Unlearnable Datasets?
Pedro Sandoval-Segura
Vasu Singla
Jonas Geiping
Micah Goldblum
Tom Goldstein
19
14
0
30 May 2023
Sharpness-Aware Data Poisoning Attack
Pengfei He
Han Xu
J. Ren
Yingqian Cui
Hui Liu
Charu C. Aggarwal
Jiliang Tang
AAML
41
7
0
24 May 2023
Towards Generalizable Data Protection With Transferable Unlearnable Examples
Bin Fang
Bo-wen Li
Shuang Wu
Tianyi Zheng
Shouhong Ding
Ran Yi
Lizhuang Ma
13
4
0
18 May 2023
Re-thinking Data Availablity Attacks Against Deep Neural Networks
Bin Fang
Bo-wen Li
Shuang Wu
Ran Yi
Shouhong Ding
Lizhuang Ma
AAML
35
0
0
18 May 2023
Unlearnable Examples Give a False Sense of Security: Piercing through Unexploitable Data with Learnable Examples
Wanzhu Jiang
Yunfeng Diao
He-Nan Wang
Jianxin Sun
Hao Wu
Richang Hong
37
18
0
16 May 2023
Assessing Vulnerabilities of Adversarial Learning Algorithm through Poisoning Attacks
Jingfeng Zhang
Bo Song
Bo Han
Lei Liu
Gang Niu
Masashi Sugiyama
AAML
19
2
0
30 Apr 2023
LAVA: Data Valuation without Pre-Specified Learning Algorithms
H. Just
Feiyang Kang
Jiachen T. Wang
Yi Zeng
Myeongseob Ko
Ming Jin
R. Jia
27
54
0
28 Apr 2023
Learning the Unlearnable: Adversarial Augmentations Suppress Unlearnable Example Attacks
Tianrui Qin
Xitong Gao
Juanjuan Zhao
Kejiang Ye
Chengzhong Xu
AAML
MU
37
27
0
27 Mar 2023
The Devil's Advocate: Shattering the Illusion of Unexploitable Data using Diffusion Models
H. M. Dolatabadi
S. Erfani
C. Leckie
DiffM
46
17
0
15 Mar 2023
Backdoor Defense via Deconfounded Representation Learning
Zaixin Zhang
Qi Liu
Zhicai Wang
Zepu Lu
Qingyong Hu
AAML
57
39
0
13 Mar 2023
CUDA: Convolution-based Unlearnable Datasets
Vinu Sankar Sadasivan
Mahdi Soltanolkotabi
S. Feizi
MU
29
23
0
07 Mar 2023
Exploring the Limits of Model-Targeted Indiscriminate Data Poisoning Attacks
Yiwei Lu
Gautam Kamath
Yaoliang Yu
AAML
39
18
0
07 Mar 2023
Unlearnable Graph: Protecting Graphs from Unauthorized Exploitation
Yixin Liu
Chenrui Fan
Pan Zhou
Lichao Sun
6
4
0
05 Mar 2023
Securing Biomedical Images from Unauthorized Training with Anti-Learning Perturbation
Yixin Liu
Haohui Ye
Kai Zhang
Lichao Sun
MedIm
17
3
0
05 Mar 2023
Audit to Forget: A Unified Method to Revoke Patients' Private Data in Intelligent Healthcare
Juexiao Zhou
Haoyang Li
Xingyu Liao
Bin Zhang
Wenjia He
Zhongxiao Li
Longxi Zhou
Xin Gao
MU
25
13
0
20 Feb 2023
Raising the Cost of Malicious AI-Powered Image Editing
Hadi Salman
Alaa Khaddaj
Guillaume Leclerc
Andrew Ilyas
A. Madry
DiffM
20
108
0
13 Feb 2023
Image Shortcut Squeezing: Countering Perturbative Availability Poisons with Compression
Zhuoran Liu
Zhengyu Zhao
Martha Larson
29
34
0
31 Jan 2023
Unlearnable Clusters: Towards Label-agnostic Unlearnable Examples
Jiaming Zhang
Xingjun Ma
Qiaomin Yi
Jitao Sang
Yugang Jiang
Yaowei Wang
Changsheng Xu
18
24
0
31 Dec 2022
ConfounderGAN: Protecting Image Data Privacy with Causal Confounder
Qi Tian
Kun Kuang
Ke Jiang
Furui Liu
Zhihua Wang
Fei Wu
14
7
0
04 Dec 2022