InComeS: Integrating Compression and Selection Mechanisms into LLMs for Efficient Model Editing

28 May 2025
Shuaiyi Li
Zhisong Zhang
Yang Deng
Chenlong Deng
Tianqing Fang
Hongming Zhang
Haitao Mi
Dong Yu
Wai Lam
Main: 9 pages · 4 figures · 8 tables · Appendix: 4 pages · Bibliography: 4 pages
Abstract

Although existing model editing methods perform well at recalling exact edit facts, they often struggle in complex scenarios that require deeper semantic understanding rather than mere knowledge regurgitation. By leveraging the strong contextual reasoning abilities of large language models (LLMs), in-context learning (ICL) has emerged as a promising editing approach that comprehends edit information through context encoding. However, this approach is constrained by the limited context window of LLMs, leading to degraded performance and efficiency as the number of edits increases. To overcome this limitation, we propose InComeS, a flexible framework that enhances LLMs' ability to process editing contexts through explicit compression and selection mechanisms. Specifically, InComeS compresses each editing context into the key-value (KV) cache of a special gist token, enabling efficient handling of multiple edits without being restricted by the model's context window. Furthermore, specialized cross-attention modules are added to dynamically select the most relevant information from the gist pools, enabling adaptive and effective utilization of edit information. We conduct experiments on diverse model editing benchmarks with various editing formats, and the results demonstrate the effectiveness and efficiency of our method.
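
To make the compression-and-selection idea concrete, below is a minimal PyTorch sketch, not the authors' implementation. It stands in for the gist-token KV cache with a simple mean-pooled vector per edit context and uses a standard cross-attention layer to let the model's hidden states select from the pooled gists. All class and variable names, and the pooling shortcut, are illustrative assumptions.

import torch
import torch.nn as nn

class GistCrossAttention(nn.Module):
    """Sketch of a compression-and-selection module for edit contexts.

    compress(): turns each edit context into a single gist vector
    (a stand-in for the gist token's KV cache).
    forward(): cross-attends the current hidden states over the gist pool
    and injects the selected edit information via a residual connection.
    """

    def __init__(self, d_model: int, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def compress(self, edit_hidden: torch.Tensor) -> torch.Tensor:
        # edit_hidden: (num_edits, edit_len, d_model) hidden states of each
        # editing context; mean-pool as a crude proxy for gist compression.
        return edit_hidden.mean(dim=1)  # (num_edits, d_model)

    def forward(self, hidden: torch.Tensor, gist_pool: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, d_model) query states of the current input.
        # gist_pool: (num_edits, d_model) compressed edit representations,
        # shared across the batch as cross-attention keys/values.
        kv = gist_pool.unsqueeze(0).expand(hidden.size(0), -1, -1)
        selected, _ = self.attn(hidden, kv, kv)  # pick relevant edits per token
        return hidden + selected                 # residual injection

# Toy usage: 10 edits of 32 tokens each, a batch of 2 queries of 16 tokens.
layer = GistCrossAttention(d_model=64)
gists = layer.compress(torch.randn(10, 32, 64))
out = layer(torch.randn(2, 16, 64), gists)
print(out.shape)  # torch.Size([2, 16, 64])

Because the query only attends over one vector per edit rather than the full edit text, the cost of consulting the edit pool grows with the number of edits, not with their total token length, which is the efficiency argument the abstract makes.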

@article{li2025_2505.22156,
  title={InComeS: Integrating Compression and Selection Mechanisms into LLMs for Efficient Model Editing},
  author={Shuaiyi Li and Zhisong Zhang and Yang Deng and Chenlong Deng and Tianqing Fang and Hongming Zhang and Haitao Mi and Dong Yu and Wai Lam},
  journal={arXiv preprint arXiv:2505.22156},
  year={2025}
}