
Does Localization Inform Unlearning? A Rigorous Examination of Local Parameter Attribution for Knowledge Unlearning in Language Models

Main: 3 pages · Bibliography: 3 pages · Appendix: 5 pages · 2 figures · 2 tables
Abstract

Large language models often retain unintended content, prompting growing interest in knowledge unlearning. Recent approaches emphasize localized unlearning, which restricts parameter updates to specific regions in an effort to remove target knowledge while preserving unrelated general knowledge. However, their effectiveness remains uncertain because the trade-off between these competing goals has not been rigorously evaluated. In this paper, we begin by revisiting existing localized unlearning approaches. We then conduct controlled experiments to rigorously evaluate whether local parameter updates causally contribute to unlearning. Our findings reveal that the set of parameters that must be modified for effective unlearning is not strictly determined, challenging the core assumption of localized unlearning that parameter locality is inherently indicative of effective knowledge removal.

@article{lee2025_2505.16252,
  title={Does Localization Inform Unlearning? A Rigorous Examination of Local Parameter Attribution for Knowledge Unlearning in Language Models},
  author={Hwiyeong Lee and Uiji Hwang and Hyelim Lim and Taeuk Kim},
  journal={arXiv preprint arXiv:2505.16252},
  year={2025}
}