ResearchTrend.AI
Catastrophic Interference in Reinforcement Learning: A Solution Based on Context Division and Knowledge Distillation

arXiv: 2109.00525 · 1 September 2021
Tiantian Zhang, Xueqian Wang, Bin Liang, Bo Yuan
Topic: OffRL

Papers citing "Catastrophic Interference in Reinforcement Learning: A Solution Based on Context Division and Knowledge Distillation"

3 / 3 papers shown
Solving Continual Offline RL through Selective Weights Activation on Aligned Spaces
Jifeng Hu, Sili Huang, Li Shen, Zhejian Yang, Shengchao Hu, Shisong Tang, Hechang Chen, Yi Chang, Dacheng Tao, Lichao Sun
Topic: OffRL · 21 Oct 2024
Leveraging Knowledge Distillation for Efficient Deep Reinforcement Learning in Resource-Constrained Environments
Guanlin Meng
16 Oct 2023
Matching DNN Compression and Cooperative Training with Resources and Data Availability
F. Malandrino, G. Giacomo, Armin Karamzade, Marco Levorato, C. Chiasserini
02 Dec 2022