ResearchTrend.AI
Model Merging for Knowledge Editing

14 June 2025
Zichuan Fu, Xian Wu, Guojing Li, Yingying Zhang, Yefeng Zheng, Tianshi Ming, Yejing Wang, Wanyu Wang, Xiangyu Zhao
Main: 5 pages · Appendix: 3 pages · Bibliography: 3 pages · 3 figures · 9 tables
Abstract

Large Language Models (LLMs) require continuous updates to keep their knowledge accurate and current as the world evolves. While existing knowledge editing approaches offer various solutions for updating knowledge, they often struggle in sequential editing scenarios and harm the model's general capabilities, significantly hampering their practical applicability. This paper proposes a two-stage framework that combines robust supervised fine-tuning (R-SFT) with model merging for knowledge editing. Our method first fine-tunes the LLM to fully internalize new knowledge, then merges the fine-tuned model with the original foundation model to preserve both the newly acquired knowledge and the model's general capabilities. Experimental results demonstrate that our approach significantly outperforms existing methods in sequential editing while better preserving the original performance of the model, all without requiring any architectural changes. Code is available at: this https URL.
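The second stage described above, merging the fine-tuned model back into the foundation model, can be illustrated with a minimal sketch. The snippet below shows plain linear weight interpolation between two parameter sets; the paper's actual merging procedure may use a different or more sophisticated scheme, and the parameter names and the `alpha` coefficient here are illustrative assumptions, not the authors' API.

```python
def merge_models(base_weights, finetuned_weights, alpha=0.5):
    """Linearly interpolate two models' parameters.

    merged = (1 - alpha) * base + alpha * finetuned

    alpha=0 keeps the original foundation model (general capabilities);
    alpha=1 keeps the fine-tuned model (edited knowledge); values in
    between trade off between the two.
    """
    merged = {}
    for name, base in base_weights.items():
        tuned = finetuned_weights[name]
        merged[name] = [(1 - alpha) * b + alpha * t for b, t in zip(base, tuned)]
    return merged


# Toy example: one "layer" holding two scalar parameters.
base = {"layer.w": [1.0, 2.0]}
edited = {"layer.w": [3.0, 6.0]}
print(merge_models(base, edited, alpha=0.5))  # {'layer.w': [2.0, 4.0]}
```

In practice the same idea would operate on framework tensors (e.g. a PyTorch `state_dict`) rather than Python lists, but the arithmetic is identical per parameter.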

@article{fu2025_2506.12384,
  title={Model Merging for Knowledge Editing},
  author={Zichuan Fu and Xian Wu and Guojing Li and Yingying Zhang and Yefeng Zheng and Tianshi Ming and Yejing Wang and Wanyu Wang and Xiangyu Zhao},
  journal={arXiv preprint arXiv:2506.12384},
  year={2025}
}