ReLearn: Unlearning via Learning for Large Language Models

16 February 2025
Haoming Xu
Ningyuan Zhao
Liming Yang
Sendong Zhao
Shumin Deng
Mengru Wang
Bryan Hooi
Nay Oo
Huajun Chen
Ningyu Zhang
    KELM
    CLL
    MU
Abstract

Current unlearning methods for large language models usually rely on reverse optimization to reduce target token probabilities. However, this paradigm disrupts prediction of the subsequent tokens, degrading model performance and linguistic coherence. Moreover, existing evaluation metrics overemphasize contextual forgetting while inadequately assessing response fluency and relevance. To address these challenges, we propose ReLearn, a data augmentation and fine-tuning pipeline for effective unlearning, along with a comprehensive evaluation framework. This framework introduces Knowledge Forgetting Rate (KFR) and Knowledge Retention Rate (KRR) to measure forgetting and retention at the knowledge level, and Linguistic Score (LS) to evaluate generation quality. Our experiments show that ReLearn successfully achieves targeted forgetting while preserving high-quality output. Through mechanistic analysis, we further demonstrate how reverse optimization disrupts coherent text generation, whereas ReLearn preserves this essential capability. Code is available at this https URL.
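
The abstract does not spell out how KFR and KRR are computed. As a rough illustration only, a knowledge-level metric can be framed as the fraction of forget-set (respectively retain-set) items whose target fact a judge no longer (respectively still) finds entailed by the model's answer. The Python sketch below follows that assumption; the entails judge, threshold, and function names are illustrative and not taken from the paper.

    from typing import Callable, List

    def knowledge_covered(answer: str, reference: str,
                          entails: Callable[[str, str], float],
                          threshold: float = 0.5) -> bool:
        # Treat the target knowledge as "present" if the model's answer
        # entails the reference fact above a score threshold.
        return entails(answer, reference) >= threshold

    def knowledge_forgetting_rate(forget_answers: List[str],
                                  forget_refs: List[str],
                                  entails: Callable[[str, str], float]) -> float:
        # Fraction of forget-set items whose target knowledge is no longer produced.
        forgotten = sum(not knowledge_covered(a, r, entails)
                        for a, r in zip(forget_answers, forget_refs))
        return forgotten / max(len(forget_refs), 1)

    def knowledge_retention_rate(retain_answers: List[str],
                                 retain_refs: List[str],
                                 entails: Callable[[str, str], float]) -> float:
        # Fraction of retain-set items whose knowledge is still produced.
        kept = sum(knowledge_covered(a, r, entails)
                   for a, r in zip(retain_answers, retain_refs))
        return kept / max(len(retain_refs), 1)

    # Toy usage with a lexical-overlap "judge" standing in for a real
    # entailment model or LLM judge.
    toy_entails = lambda ans, ref: float(ref.lower() in ans.lower())
    print(knowledge_forgetting_rate(["I cannot share that."], ["born in 1990"], toy_entails))  # 1.0

In this framing, higher KFR on the forget set and higher KRR on the retain set are both desirable, with the Linguistic Score guarding against the degenerate outputs that reverse optimization tends to produce.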

@article{xu2025_2502.11190,
  title={ReLearn: Unlearning via Learning for Large Language Models},
  author={Haoming Xu and Ningyuan Zhao and Liming Yang and Sendong Zhao and Shumin Deng and Mengru Wang and Bryan Hooi and Nay Oo and Huajun Chen and Ningyu Zhang},
  journal={arXiv preprint arXiv:2502.11190},
  year={2025}
}