ResearchTrend.AI
Experience Replay Addresses Loss of Plasticity in Continual Learning

25 March 2025
Jiuqi Wang
Rohan Chandra
Shangtong Zhang
Communities: CLL, KELM
Abstract

Loss of plasticity is one of the main challenges in continual learning with deep neural networks: networks trained via backpropagation gradually lose their ability to adapt to new tasks and perform significantly worse than their freshly initialized counterparts. The main contribution of this paper is a new hypothesis that experience replay, a form of memory, addresses the loss of plasticity in continual learning. We provide supporting evidence for this hypothesis: across multiple tasks, including regression, classification, and policy evaluation, simply adding an experience replay buffer and processing the replayed data with Transformers makes the loss of plasticity disappear. Notably, we do not alter any standard components of deep learning: we do not change backpropagation, we do not modify the activation functions, and we do not use any regularization. We conjecture that experience replay and Transformers can address the loss of plasticity because of the in-context learning phenomenon.
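The setup the abstract describes, maintaining a memory of past experience and replaying it alongside new data, can be illustrated with a minimal sketch. Everything below is hypothetical: the `ReplayBuffer` class and the toy task loop are assumptions for illustration, not the authors' code, and the Transformer that would process the replayed context in their method is omitted.

```python
import random
from collections import deque

class ReplayBuffer:
    """Hypothetical fixed-capacity experience replay buffer (illustration only)."""

    def __init__(self, capacity):
        # deque with maxlen discards the oldest examples once capacity is reached
        self.buffer = deque(maxlen=capacity)

    def add(self, example):
        self.buffer.append(example)

    def sample(self, batch_size):
        # Uniformly sample a batch of stored examples (without replacement)
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))

    def __len__(self):
        return len(self.buffer)

# Toy continual-learning stream: three tasks arriving one after another.
buf = ReplayBuffer(capacity=100)
for task_id in range(3):
    for step in range(50):
        buf.add((task_id, step))  # store (task, observation) pairs as they arrive

# In the paper's setup (per the abstract), a sampled batch like this would be
# assembled into a context and processed by a Transformer, so data from earlier
# tasks stays "in context" rather than being seen only once.
batch = buf.sample(8)
```

The design point the sketch makes concrete is that nothing about the optimizer, activations, or loss changes; the only addition is the memory that later examples are mixed with.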

@article{wang2025_2503.20018,
  title={Experience Replay Addresses Loss of Plasticity in Continual Learning},
  author={Jiuqi Wang and Rohan Chandra and Shangtong Zhang},
  journal={arXiv preprint arXiv:2503.20018},
  year={2025}
}