ResearchTrend.AI
Neural Distillation as a State Representation Bottleneck in Reinforcement Learning

5 October 2022
Authors: Valentin Guillet, D. Wilson, Carlos Aguilar-Melchor, Emmanuel Rachelson

Papers citing "Neural Distillation as a State Representation Bottleneck in Reinforcement Learning"

2 / 2 papers shown
Title: Improving Generalization in Reinforcement Learning with Mixture Regularization
Authors: Kaixin Wang, Bingyi Kang, Jie Shao, Jiashi Feng
Published: 21 Oct 2020
Title: Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Authors: Chelsea Finn, Pieter Abbeel, Sergey Levine
Tags: OOD
Published: 09 Mar 2017