
Beyond Memorization: A Rigorous Evaluation Framework for Medical Knowledge Editing

Main: 7 pages · 12 figures · 9 tables · Bibliography: 3 pages · Appendix: 10 pages
Abstract

Recently, knowledge editing (KE) has emerged as a promising approach for updating specific facts in Large Language Models (LLMs) without full retraining. Despite their effectiveness on general-domain benchmarks, the applicability of KE methods to the complex medical domain remains largely unexplored. Medical knowledge editing is particularly challenging, as it requires LLMs to internalize the knowledge and generalize to unseen scenarios for effective and interpretable decision-making. In this work, we propose a novel framework called MedEditBench to rigorously evaluate the effectiveness of existing KE methods in the medical domain. In MedEditBench, we introduce a new medical knowledge editing benchmark as well as three different knowledge editing paradigms, which are designed to assess the impact of different knowledge sources for editing. Our findings indicate that current KE methods result in only superficial memorization of the injected information, failing to generalize to new scenarios. To overcome this limitation, we present Self-Generated Rationale Editing (SGR-Edit), which uses model-derived rationales as the target knowledge for editing, thereby surfacing the underlying reasoning process and achieving significant improvements over existing KE approaches. Additionally, we offer deeper insights into medical knowledge editing, including the localization of medical knowledge in LLMs and the impact of sequential editing on evolving knowledge. These findings provide practical guidance for applying KE methods in real-world medical applications.
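To make the SGR-Edit idea concrete, the sketch below illustrates how a model-derived rationale might be used as the editing target instead of a bare answer string. This is only a minimal illustration of the concept described in the abstract; the helper functions (generate, apply_knowledge_edit) and the prompt wording are assumptions for exposition, not the authors' actual implementation.

    def sgr_edit(model, question: str, updated_answer: str):
        """Edit `model` using a self-generated rationale as the target knowledge."""
        # Step 1: ask the model to explain why the updated answer holds,
        # so the edit target captures reasoning rather than a bare fact.
        rationale_prompt = (
            f"Question: {question}\n"
            f"Correct answer: {updated_answer}\n"
            "Explain step by step why this answer is correct."
        )
        rationale = generate(model, rationale_prompt)  # hypothetical generation helper

        # Step 2: pass the rationale (not just the answer string) as the target
        # knowledge to an off-the-shelf KE method (e.g., a ROME/MEMIT-style editor).
        edited_model = apply_knowledge_edit(  # hypothetical editing helper
            model,
            subject=question,
            target=rationale,
        )
        return edited_model

Under this reading, the design choice is that the editing target carries the reasoning chain supporting the new fact, which is what the paper argues helps edited knowledge generalize to unseen scenarios rather than being memorized verbatim.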

@article{chen2025_2506.03490,
  title={Beyond Memorization: A Rigorous Evaluation Framework for Medical Knowledge Editing},
  author={Shigeng Chen and Linhao Luo and Zhangchi Qiu and Yanan Cao and Carl Yang and Shirui Pan},
  journal={arXiv preprint arXiv:2506.03490},
  year={2025}
}