Invariance Makes LLM Unlearning Resilient Even to Unanticipated Downstream Fine-Tuning

Machine unlearning offers a promising solution to privacy and safety concerns in large language models (LLMs) by selectively removing targeted knowledge while preserving utility. However, current methods are highly sensitive to downstream fine-tuning, which can quickly recover forgotten information, even from unrelated tasks. To address this, we introduce invariance into unlearning for the first time, inspired by invariant risk minimization (IRM). Building on this principle, we propose invariant LLM unlearning (ILU), a regularization-based framework that enhances robustness. Notably, ILU generalizes well to diverse fine-tuning tasks, even when trained using a single fine-tuning dataset. A task-vector analysis is also provided to further elucidate the rationale behind ILU's effectiveness. Extensive experiments on the WMDP and MUSE benchmarks reveal that ILU significantly outperforms state-of-the-art unlearning methods, including negative preference optimization (NPO) and representation misdirection for unlearning (RMU). In particular, ILU achieves superior unlearning robustness across diverse downstream fine-tuning scenarios (e.g., math, paraphrase detection, and sentiment analysis) while preserving fine-tuning performance.
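For readers unfamiliar with IRM, the following is a minimal sketch of how an IRM-style invariance penalty can be attached to an unlearning objective; the losses, environment set, and weights shown here are illustrative assumptions, not the authors' exact ILU formulation. With a forget loss $\mathcal{L}_{\mathrm{f}}$, a retain (utility) loss $\mathcal{L}_{\mathrm{r}}$, and a set of fine-tuning "environments" $\mathcal{E}$ with per-environment losses $\mathcal{L}^{e}$, an IRM-regularized unlearning objective could take the form

$$
\min_{\theta}\;\; \mathcal{L}_{\mathrm{f}}(\theta) \;+\; \gamma\,\mathcal{L}_{\mathrm{r}}(\theta) \;+\; \lambda \sum_{e \in \mathcal{E}} \Big\| \nabla_{w}\, \mathcal{L}^{e}(w \cdot \theta)\,\Big|_{w=1} \Big\|_{2}^{2},
$$

where the gradient-penalty term is the IRMv1 surrogate of Arjovsky et al., encouraging the unlearned parameters to remain (near-)stationary across environments so that later fine-tuning on any one of them has less incentive to undo the forgetting.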
@article{wang2025_2506.01339,
  title   = {Invariance Makes LLM Unlearning Resilient Even to Unanticipated Downstream Fine-Tuning},
  author  = {Changsheng Wang and Yihua Zhang and Jinghan Jia and Parikshit Ram and Dennis Wei and Yuguang Yao and Soumyadeep Pal and Nathalie Baracaldo and Sijia Liu},
  journal = {arXiv preprint arXiv:2506.01339},
  year    = {2025}
}