REACT: Representation Extraction And Controllable Tuning to Overcome Overfitting in LLM Knowledge Editing

Abstract

Large language model editing methods frequently suffer from overfitting, wherein factual updates propagate beyond their intended scope, overemphasizing the edited target even when it is contextually inappropriate. To address this challenge, we introduce REACT (Representation Extraction And Controllable Tuning), a unified two-phase framework designed for precise and controllable knowledge editing. In the first phase, we use tailored stimuli to extract latent factual representations and apply Principal Component Analysis with a simple learnable linear transformation to compute a directional "belief shift" vector for each instance. In the second phase, we apply controllable perturbations to hidden states using the obtained vector scaled by a magnitude scalar, gated by a pre-trained classifier that permits edits only when contextually necessary. Experiments on the EVOKE benchmark demonstrate that REACT significantly reduces overfitting across nearly all evaluation metrics, and experiments on COUNTERFACT and MQuAKE show that our method preserves balanced basic editing performance (reliability, locality, and generality) under diverse editing scenarios.
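The two phases described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the array shapes, the identity stand-in for the learnable linear map, and the gating threshold are all assumptions for exposition.

```python
import numpy as np

def belief_shift_vector(reps_old, reps_new):
    """Phase 1 sketch: derive a directional "belief shift" vector.

    reps_old, reps_new: stimulus-induced hidden states for the old and
    edited fact (hypothetical shape: [n_stimuli, d_hidden]).
    """
    diffs = reps_new - reps_old
    # PCA via SVD: top principal component of the centered differences.
    centered = diffs - diffs.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    direction = vt[0]
    # Stand-in for the learnable linear transformation (identity here;
    # in the paper this map is trained).
    W = np.eye(direction.shape[0])
    return W @ direction

def edit_hidden_state(h, v, alpha, gate_prob, threshold=0.5):
    """Phase 2 sketch: perturb the hidden state only when the
    pre-trained gating classifier deems an edit contextually necessary."""
    if gate_prob >= threshold:
        return h + alpha * v  # controllable magnitude via alpha
    return h
```

The gate is what keeps the edit from firing on unrelated contexts, which is precisely the overfitting failure mode the paper targets.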

@article{zhong2025_2505.18933,
  title={REACT: Representation Extraction And Controllable Tuning to Overcome Overfitting in LLM Knowledge Editing},
  author={Haitian Zhong and Yuhuan Liu and Ziyang Xu and Guofan Liu and Qiang Liu and Shu Wu and Zhe Zhao and Liang Wang and Tieniu Tan},
  journal={arXiv preprint arXiv:2505.18933},
  year={2025}
}