CollabEdit: Towards Non-destructive Collaborative Knowledge Editing

12 October 2024
Jiamu Zheng
Jinghuai Zhang
Tianyu Du
Xuhong Zhang
Jianwei Yin
Tao Lin
Abstract

Collaborative learning of large language models (LLMs) has emerged as a new paradigm for utilizing private data from different parties while guaranteeing efficiency and privacy. Meanwhile, Knowledge Editing (KE) for LLMs has garnered increasing attention due to its ability to explicitly manipulate the behaviors of LLMs; yet the collaborative KE case, in which knowledge edits from multiple parties are aggregated in a privacy-preserving and continual manner, remains unexamined. To this end, this manuscript presents the first investigation of collaborative KE, in which we begin by carefully identifying its three unique challenges: knowledge overlap, knowledge conflict, and knowledge forgetting. We then propose a non-destructive collaborative KE framework, COLLABEDIT, which employs a novel model merging mechanism to mimic the global KE behavior while preventing severe performance drops. Extensive experiments on two canonical datasets demonstrate the superiority of COLLABEDIT over other, destructive baselines, and the results shed light on addressing the three collaborative KE challenges and on future applications. Our code is available at this https URL.
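To illustrate why a merging-based aggregation can be non-destructive, consider the class of closed-form locate-and-edit methods (e.g., ROME/MEMIT-style updates), where each edit reduces to key and value statistics for a weight matrix. The sketch below is a hypothetical toy, not the paper's actual algorithm: each party shares only its summed sufficient statistics, and the server recovers exactly the update a single global editor would compute over the pooled edits, instead of destructively averaging per-party weight deltas. All matrix names and sizes here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 8, 4  # toy hidden sizes (hypothetical)

def party_stats(n_edits):
    """Each party derives, from its local edits, a key matrix K_i (d_in x n_i)
    and a desired-output residual R_i (d_out x n_i), as in closed-form KE."""
    K = rng.normal(size=(d_in, n_edits))
    R = rng.normal(size=(d_out, n_edits))
    return K, R

parties = [party_stats(n) for n in (3, 5, 2)]

# Non-destructive aggregation: parties share only the sums of R_i K_i^T and
# K_i K_i^T, so the server reproduces the single-editor global update.
C = np.eye(d_in)  # covariance regularizer (stand-in for precomputed key stats)
RK = sum(R @ K.T for K, R in parties)
KK = sum(K @ K.T for K, _ in parties)
delta_W = RK @ np.linalg.inv(C + KK)

# Reference: the same update computed from the pooled edits directly.
K_all = np.concatenate([K for K, _ in parties], axis=1)
R_all = np.concatenate([R for _, R in parties], axis=1)
delta_W_global = (R_all @ K_all.T) @ np.linalg.inv(C + K_all @ K_all.T)

assert np.allclose(delta_W, delta_W_global)
```

Because matrix products over concatenated columns decompose into sums of per-party products, the merged update is exactly the global one; naive averaging of per-party delta weights would instead shrink each edit by the number of parties.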

@article{zheng2025_2410.09508,
  title={CollabEdit: Towards Non-destructive Collaborative Knowledge Editing},
  author={Jiamu Zheng and Jinghuai Zhang and Tianyu Du and Xuhong Zhang and Jianwei Yin and Tao Lin},
  journal={arXiv preprint arXiv:2410.09508},
  year={2025}
}