Learning the Mechanism of Catastrophic Forgetting: A Perspective from Gradient Similarity

Mutian Yang
Zisen Zhan
Yutong Chen
Haolin Li
Kaiwen Wang
Kaili Zheng
Yuguang Wang
Qi Wang
Jiandong Gao
Ji Wu
Abstract

Catastrophic forgetting during knowledge injection severely undermines the continual learning capability of large language models (LLMs). Although existing methods attempt to mitigate this issue, they often lack a foundational theoretical explanation. We establish a gradient-based theoretical framework to explain catastrophic forgetting. We first prove that strongly negative gradient similarity is a fundamental cause of forgetting. We then use gradient similarity to identify two types of neurons: conflicting neurons that induce forgetting and account for 50%-75% of neurons, and collaborative neurons that mitigate forgetting and account for 25%-50%. Based on this analysis, we propose a knowledge injection method, Collaborative Neural Learning (CNL). By freezing conflicting neurons and updating only collaborative neurons, CNL theoretically eliminates catastrophic forgetting under an infinitesimal learning rate η and an exactly known mastered set. Experiments on five LLMs, four datasets, and four optimizers show that CNL achieves zero forgetting in in-set settings and reduces forgetting by 59.1%-81.7% in out-of-set settings.
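To make the neuron-partitioning idea concrete, here is a minimal PyTorch sketch of one plausible realization: compute per-neuron cosine similarity between gradients on previously mastered data and on the new data to inject, then zero the gradients of neurons with negative similarity ("conflicting") so that only "collaborative" neurons are updated. The function names, the row-wise neuron granularity, and the `loss_fn(model, batch)` interface are illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F


def split_neurons_by_gradient_similarity(model, old_batch, new_batch, loss_fn):
    """Build per-neuron update masks from gradient similarity.

    Assumption: each 2-D weight matrix's rows are treated as "neurons".
    Returns a dict mapping parameter name -> {0,1} mask of shape (rows, 1).
    """
    # Gradients of the loss on previously mastered data.
    model.zero_grad()
    loss_fn(model, old_batch).backward()
    old_grads = {n: p.grad.detach().clone()
                 for n, p in model.named_parameters() if p.grad is not None}

    # Gradients of the loss on the new knowledge to inject.
    model.zero_grad()
    loss_fn(model, new_batch).backward()

    masks = {}
    for name, p in model.named_parameters():
        if p.grad is None or p.dim() != 2 or name not in old_grads:
            continue
        # Per-row cosine similarity between old-data and new-data gradients.
        sim = F.cosine_similarity(old_grads[name], p.grad, dim=1)
        # Collaborative neurons (sim >= 0) may update; conflicting ones are frozen.
        masks[name] = (sim >= 0).float().unsqueeze(1)
    return masks


def apply_masks(model, masks):
    """Zero the gradients of conflicting neurons before optimizer.step()."""
    for name, p in model.named_parameters():
        if name in masks and p.grad is not None:
            p.grad.mul_(masks[name])
```

In use, one would recompute the masks (periodically or per step) during knowledge injection, call `loss_fn(model, new_batch).backward()`, apply the masks, and only then step the optimizer, so frozen (conflicting) neurons receive no update.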
