
When Forgetting Triggers Backdoors: A Clean Unlearning Attack

Comments: 8 pages (main text), 2 pages (bibliography), 7 figures, 5 tables
Abstract

Machine unlearning has emerged as a key component in ensuring the ``Right to be Forgotten'', enabling the removal of specific data points from trained models. However, even when unlearning is performed without poisoning the forget-set (clean unlearning), it can be exploited for stealthy attacks that existing defenses struggle to detect. In this paper, we propose a novel {\em clean} backdoor attack that exploits both the model learning phase and the subsequent unlearning requests. Unlike traditional backdoor methods, our approach injects, during the first phase, a weak malicious signal distributed across multiple classes. The attack is then activated and amplified by selectively unlearning {\em non-poisoned} samples. This strategy yields a novel attack that is both powerful and stealthy, and hard to detect or mitigate, exposing critical vulnerabilities in current unlearning mechanisms and underscoring the need for more robust defenses.
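
The two-phase mechanism described in the abstract can be illustrated with a small, self-contained sketch. The snippet below is not the authors' implementation: it uses synthetic data, a linear classifier, a fixed additive perturbation as the "weak distributed signal", a naive gradient-ascent step as a stand-in for the unlearning routine, and a hypothetical trigger-alignment heuristic for choosing which clean samples to request for unlearning; the paper's actual trigger design, sample-selection strategy, and unlearning procedure may differ.

# Minimal sketch of the two-phase idea on synthetic data (illustrative only,
# not the authors' code). Assumptions: linear classifier, additive trigger,
# gradient-ascent unlearning, hypothetical forget-sample selection heuristic.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
n, d, num_classes, target = 3000, 20, 5, 0

X = torch.randn(n, d)                     # synthetic features
y = torch.randint(0, num_classes, (n,))   # synthetic labels

# Phase 1 (learning): a small fraction of non-target samples receive a
# low-amplitude trigger and the target label, spreading a weak malicious
# signal across multiple source classes.
trigger = 0.3 * torch.randn(d)
poison_mask = (y != target) & (torch.rand(n) < 0.02)
X[poison_mask] += trigger
y[poison_mask] = target

model = nn.Linear(d, num_classes)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    F.cross_entropy(model(X), y).backward()
    opt.step()

# Phase 2 (clean unlearning): request removal of NON-poisoned samples only.
# Hypothetical selection: clean, non-target samples most aligned with the
# trigger direction, i.e. the evidence that keeps triggered inputs mapped
# to their true classes.
scores = X @ trigger
scores[poison_mask | (y == target)] = float("-inf")
forget_idx = scores.topk(100).indices

unlearn_opt = torch.optim.SGD(model.parameters(), lr=1e-3)
for _ in range(50):
    unlearn_opt.zero_grad()
    # Gradient ascent on the forget set: a naive approximate unlearning step.
    (-F.cross_entropy(model(X[forget_idx]), y[forget_idx])).backward()
    unlearn_opt.step()

# Backdoor effect: fraction of triggered test inputs pushed to the target class.
X_test = torch.randn(500, d)
asr = (model(X_test + trigger).argmax(dim=1) == target).float().mean().item()
print(f"attack success rate after unlearning: {asr:.2%}")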

@article{arazzi2025_2506.12522,
  title={When Forgetting Triggers Backdoors: A Clean Unlearning Attack},
  author={Marco Arazzi and Antonino Nocera and Vinod P},
  journal={arXiv preprint arXiv:2506.12522},
  year={2025}
}