Plasticine: Accelerating Research in Plasticity-Motivated Deep Reinforcement Learning

24 April 2025
Mingqi Yuan
Qi Wang
Guozheng Ma
Bo Li
Xin Jin
Yunbo Wang
Xiaokang Yang
Wenjun Zeng
Dacheng Tao
Communities: OffRL, AI4CE
Abstract

Developing lifelong learning agents is crucial for artificial general intelligence. However, deep reinforcement learning (RL) systems often suffer from plasticity loss, where neural networks gradually lose their ability to adapt during training. Despite its significance, this field lacks unified benchmarks and evaluation protocols. We introduce Plasticine, the first open-source framework for benchmarking plasticity optimization in deep RL. Plasticine provides single-file implementations of over 13 mitigation methods, 10 evaluation metrics, and learning scenarios with increasing non-stationarity levels from standard to open-ended environments. This framework enables researchers to systematically quantify plasticity loss, evaluate mitigation strategies, and analyze plasticity dynamics across different contexts. Our documentation, examples, and source code are available at this https URL.
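To give a sense of what one of the plasticity metrics mentioned in the abstract measures, below is a minimal PyTorch sketch of the dormant-neuron ratio (Sokar et al., 2023), a commonly used proxy for plasticity loss. The function name, threshold value, and toy encoder are illustrative assumptions, not Plasticine's actual API.

import torch
import torch.nn as nn

# Illustrative sketch (not Plasticine's API): dormant-neuron ratio,
# i.e. the fraction of units whose normalized mean activation on a
# batch of inputs falls below a small threshold tau.
def dormant_ratio(layer_output: torch.Tensor, tau: float = 0.025) -> float:
    score = layer_output.abs().mean(dim=0)    # per-unit mean activation over the batch
    score = score / (score.mean() + 1e-8)     # normalize by the layer-wide mean
    return (score <= tau).float().mean().item()

# Toy usage: a small MLP encoder evaluated on random observations.
encoder = nn.Sequential(nn.Linear(8, 64), nn.ReLU())
obs = torch.randn(256, 8)
with torch.no_grad():
    print(f"dormant ratio: {dormant_ratio(encoder(obs)):.3f}")

A rising dormant ratio over training is one signal that the network is losing its capacity to adapt, which is the kind of dynamic the framework's evaluation metrics are designed to track.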

@article{yuan2025_2504.17490,
  title={Plasticine: Accelerating Research in Plasticity-Motivated Deep Reinforcement Learning},
  author={Mingqi Yuan and Qi Wang and Guozheng Ma and Bo Li and Xin Jin and Yunbo Wang and Xiaokang Yang and Wenjun Zeng and Dacheng Tao},
  journal={arXiv preprint arXiv:2504.17490},
  year={2025}
}