Hidden Poison: Machine Unlearning Enables Camouflaged Poisoning Attacks

21 December 2022
Jimmy Z. Di, Jack Douglas, Jayadev Acharya, Gautam Kamath, Ayush Sekhari · MU

Papers citing "Hidden Poison: Machine Unlearning Enables Camouflaged Poisoning Attacks"

31 papers shown
Erased but Not Forgotten: How Backdoors Compromise Concept Erasure
Jonas Henry Grebe, Tobias Braun, Marcus Rohrbach, Anna Rohrbach · AAML · 29 Apr 2025

Propaganda via AI? A Study on Semantic Backdoors in Large Language Models
Nay Myat Min, Long H. Pham, Yige Li, Jun Sun · AAML · 15 Apr 2025

ReVeil: Unconstrained Concealed Backdoor Attack on Deep Neural Networks using Machine Unlearning
Manaar Alam, Hithem Lamri, Michail Maniatakos · AAML · 17 Feb 2025

Forget Vectors at Play: Universal Input Perturbations Driving Machine Unlearning in Image Classification
Changchang Sun, Ren Wang, Yihua Zhang, Jinghan Jia, Jiancheng Liu, Gaowen Liu, Sijia Liu, Yan Yan · AAML, MU · 21 Dec 2024

Unstable Unlearning: The Hidden Risk of Concept Resurgence in Diffusion Models
Vinith M. Suriyakumar, Rohan Alur, Ayush Sekhari, Manish Raghavan, Ashia C. Wilson · 10 Oct 2024

Releasing Malevolence from Benevolence: The Menace of Benign Data on Machine Unlearning
Binhao Ma, Tianhang Zheng, Hongsheng Hu, Di Wang, Shuo Wang, Zhongjie Ba, Zhan Qin, Kui Ren · AAML · 06 Jul 2024

Machine Unlearning Fails to Remove Data Poisoning Attacks
Martin Pawelczyk, Jimmy Z. Di, Yiwei Lu, Gautam Kamath, Ayush Sekhari, Seth Neel · AAML, MU · 25 Jun 2024

MU-Bench: A Multitask Multimodal Benchmark for Machine Unlearning
Jiali Cheng, Hadi Amiri · BDL · 21 Jun 2024

Label Smoothing Improves Machine Unlearning
Zonglin Di, Zhaowei Zhu, Jinghan Jia, Jiancheng Liu, Zafar Takhirov, Bo Jiang, Yuanshun Yao, Sijia Liu, Yang Liu · 11 Jun 2024

A Survey on Machine Unlearning: Techniques and New Emerged Privacy Risks
Hengzhu Liu, Ping Xiong, Tianqing Zhu, Philip S. Yu · 10 Jun 2024

Guaranteeing Data Privacy in Federated Unlearning with Dynamic User Participation
Ziyao Liu, Yu Jiang, Weifeng Jiang, Jiale Guo, Jun Zhao, Kwok-Yan Lam · MU, FedML · 03 Jun 2024

Exploring Fairness in Educational Data Mining in the Context of the Right to be Forgotten
Wei Qian, Aobo Chen, Chenxu Zhao, Yangyi Li, Mengdi Huai · MU · 27 May 2024

Ferrari: Federated Feature Unlearning via Optimizing Feature Sensitivity
Hanlin Gu, W. Ong, Chee Seng Chan, Lixin Fan · MU · 23 May 2024

Machine Unlearning: A Comprehensive Survey
Weiqi Wang, Zhiyi Tian, Chenhan Zhang, Shui Yu · MU, AILaw · 13 May 2024

Learn What You Want to Unlearn: Unlearning Inversion Attacks against Machine Unlearning
Hongsheng Hu, Shuo Wang, Tian Dong, Minhui Xue · AAML · 04 Apr 2024

Label-Agnostic Forgetting: A Supervision-Free Unlearning in Deep Models
Shao Shen, Chenhao Zhang, Yawen Zhao, Alina Bialkowski, Tony Weitong Chen, Miao Xu · MU · 31 Mar 2024

Threats, Attacks, and Defenses in Machine Unlearning: A Survey
Ziyao Liu, Huanyi Ye, Chen Chen, Yongsen Zheng, K. Lam · AAML, MU · 20 Mar 2024

Dataset Condensation Driven Machine Unlearning
Junaid Iqbal Khan · DD · 31 Jan 2024

Attack and Reset for Unlearning: Exploiting Adversarial Noise toward Machine Unlearning through Parameter Re-initialization
Yoonhwa Jung, Ikhyun Cho, Shun-Hsiang Hsu, J. Hockenmaier · AAML, MU · 17 Jan 2024

MultiDelete for Multimodal Machine Unlearning
Jiali Cheng, Hadi Amiri · MU · 18 Nov 2023

ERASER: Machine Unlearning in MLaaS via an Inference Serving-Aware Approach
Yuke Hu, Jian Lou, Jiaqi Liu, Wangze Ni, Feng Lin, Zhan Qin, Kui Ren · MU · 03 Nov 2023

A Survey on Federated Unlearning: Challenges, Methods, and Future Directions
Ziyao Liu, Yu Jiang, Jiyuan Shen, Minyi Peng, Kwok-Yan Lam, Xingliang Yuan, Xiaoning Liu · MU · 31 Oct 2023

Tailoring Adversarial Attacks on Deep Neural Networks for Targeted Class Manipulation Using DeepFool Algorithm
S. M. Fazle, J. Mondal, Meem Arafat Manab, Xi Xiao, Sarfaraz Newaz · AAML · 18 Oct 2023

A Duty to Forget, a Right to be Assured? Exposing Vulnerabilities in Machine Unlearning Services
Hongsheng Hu, Shuo Wang, Jiamin Chang, Haonan Zhong, Ruoxi Sun, Shuang Hao, Haojin Zhu, Minhui Xue · MU · 15 Sep 2023

Ticketed Learning-Unlearning Schemes
Badih Ghazi, Pritish Kamath, Ravi Kumar, Pasin Manurangsi, Ayush Sekhari, Chiyuan Zhang · MU · 27 Jun 2023

Exploring the Landscape of Machine Unlearning: A Comprehensive Survey and Taxonomy
T. Shaik, Xiaohui Tao, Haoran Xie, Lin Li, Xiaofeng Zhu, Qingyuan Li · MU · 10 May 2023

Model Sparsity Can Simplify Machine Unlearning
Jinghan Jia, Jiancheng Liu, Parikshit Ram, Yuguang Yao, Gaowen Liu, Yang Liu, Pranay Sharma, Sijia Liu · MU · 11 Apr 2023

Choosing Public Datasets for Private Machine Learning via Gradient Subspace Distance
Xin Gu, Gautam Kamath, Zhiwei Steven Wu · 02 Mar 2023

Uncovering Adversarial Risks of Test-Time Adaptation
Tong Wu, Feiran Jia, Xiangyu Qi, Jiachen T. Wang, Vikash Sehwag, Saeed Mahloujifar, Prateek Mittal · AAML, TTA · 29 Jan 2023

Mixed-Privacy Forgetting in Deep Networks
Aditya Golatkar, Alessandro Achille, Avinash Ravichandran, M. Polito, Stefano Soatto · CLL, MU · 24 Dec 2020

ImageNet Large Scale Visual Recognition Challenge
Olga Russakovsky, Jia Deng, Hao Su, J. Krause, S. Satheesh, ..., A. Karpathy, A. Khosla, Michael S. Bernstein, Alexander C. Berg, Li Fei-Fei · VLM, ObjD · 01 Sep 2014