
Revealing the Deceptiveness of Knowledge Editing: A Mechanistic Analysis of Superficial Editing

Abstract

Knowledge editing, which aims to update the knowledge encoded in language models, can be deceptive. Although many existing knowledge editing algorithms achieve near-perfect performance on conventional metrics, the edited models remain prone to generating the original knowledge. This paper introduces the concept of "superficial editing" to describe this phenomenon. Our comprehensive evaluation reveals that this issue presents a significant challenge to existing algorithms. Through systematic investigation, we identify and validate two key factors contributing to this issue: (1) the residual stream at the last subject position in earlier layers and (2) specific attention modules in later layers. Notably, certain attention heads in later layers, along with specific left singular vectors in their output matrices, encapsulate the original knowledge and exhibit a causal relationship with superficial editing. Furthermore, we extend our analysis to the task of superficial unlearning, where we observe consistent patterns in the behavior of specific attention heads and their corresponding left singular vectors, thereby demonstrating the robustness and broader applicability of our methodology and conclusions. Our code is available here.

@article{xie2025_2505.12636,
  title={Revealing the Deceptiveness of Knowledge Editing: A Mechanistic Analysis of Superficial Editing},
  author={Jiakuan Xie and Pengfei Cao and Yubo Chen and Kang Liu and Jun Zhao},
  journal={arXiv preprint arXiv:2505.12636},
  year={2025}
}