Existing Large Language Model Unlearning Evaluations Are Inconclusive

Machine unlearning aims to remove sensitive or undesired data from large language models. However, recent studies suggest that unlearning is often shallow: supposedly removed knowledge can easily be recovered. In this work, we critically examine standard unlearning evaluation practices and uncover key limitations that shake our trust in those findings. First, we show that some evaluations introduce substantial new information into the model, potentially masking true unlearning performance by re-teaching the model during testing. Second, we demonstrate that evaluation outcomes vary significantly across tasks, undermining the generalizability of current evaluation routines. Finally, we find that many evaluations rely on spurious correlations, making their results difficult to trust and interpret. Taken together, these issues suggest that current evaluation protocols may both overstate and understate unlearning success. To address this, we propose two principles for future unlearning evaluations: minimal information injection and downstream task awareness. We validate these principles through a series of targeted experiments, showing how violations of each can lead to misleading conclusions.
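
The abstract does not spell out how "minimal information injection" might be checked in practice, but the concern can be illustrated with a small sketch. The snippet below is not the authors' protocol: the model path, the forget-set question, and the `answer_log_prob` helper are hypothetical placeholders. It contrasts a clean evaluation prompt with one that restates the target fact, showing how a leaky prompt can re-teach the model in-context during testing and inflate apparent knowledge recovery.

```python
# Illustrative sketch only: contrasts two evaluation prompts for a supposedly
# unlearned fact. The second prompt leaks the answer into the context
# (information injection), so a correct completion no longer evidences failed
# unlearning. Model path and forget-set example are hypothetical placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "path/to/unlearned-model"  # hypothetical unlearned checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH)
model.eval()

question = "Who wrote the novel 'Example Book'?"  # toy forget-set query
answer = "Jane Doe"                               # the supposedly removed fact

# Protocol A: query the model directly (minimal information injection).
prompt_clean = f"Q: {question}\nA:"

# Protocol B: the evaluation prompt itself restates the fact before asking,
# re-teaching the model in-context at test time.
prompt_leaky = (
    f"Background: The novel 'Example Book' was written by {answer}.\n"
    f"Q: {question}\nA:"
)

def answer_log_prob(prompt: str, target: str) -> float:
    """Sum of log-probabilities the model assigns to `target` after `prompt`."""
    full = tokenizer(prompt + " " + target, return_tensors="pt")
    prompt_len = tokenizer(prompt, return_tensors="pt")["input_ids"].shape[1]
    with torch.no_grad():
        logits = model(**full).logits
    # Token at position i is predicted by the logits at position i - 1.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    target_ids = full["input_ids"][0, prompt_len:]
    token_scores = log_probs[prompt_len - 1 : prompt_len - 1 + len(target_ids)]
    # Boundary tokenization is handled loosely; adequate for a sketch.
    return token_scores.gather(1, target_ids.unsqueeze(1)).sum().item()

score_clean = answer_log_prob(prompt_clean, answer)
score_leaky = answer_log_prob(prompt_leaky, answer)

# A large gap suggests the evaluation prompt, not retained model knowledge,
# is supplying the "recovered" information.
print(f"log p(answer | clean prompt): {score_clean:.2f}")
print(f"log p(answer | leaky prompt): {score_leaky:.2f}")
```

Under this reading, a leaky prompt that substantially raises the answer's likelihood says more about the evaluation protocol than about whether the model still retains the fact.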
@article{feng2025_2506.00688,
  title={Existing Large Language Model Unlearning Evaluations Are Inconclusive},
  author={Zhili Feng and Yixuan Even Xu and Alexander Robey and Robert Kirk and Xander Davies and Yarin Gal and Avi Schwarzschild and J. Zico Kolter},
  journal={arXiv preprint arXiv:2506.00688},
  year={2025}
}