ResearchTrend.AI

Is High Variance Unavoidable in RL? A Case Study in Continuous Control
arXiv:2110.11222 · 21 October 2021
Johan Bjorck, Carla P. Gomes, Kilian Q. Weinberger

Papers citing "Is High Variance Unavoidable in RL? A Case Study in Continuous Control"

4 / 4 citing papers shown:

1. KETCHUP: K-Step Return Estimation for Sequential Knowledge Distillation
   Jiabin Fan, Guoqing Luo, Michael Bowling, Lili Mou (OffRL), 26 Apr 2025
2. Dissecting Deep RL with High Update Ratios: Combatting Value Divergence
   Marcel Hussing, C. Voelcker, Igor Gilitschenski, Amir-massoud Farahmand, Eric Eaton, 09 Mar 2024
3. Efficient Deep Reinforcement Learning Requires Regulating Overfitting
   Qiyang Li, Aviral Kumar, Ilya Kostrikov, Sergey Levine (OffRL), 20 Apr 2023
4. The Primacy Bias in Deep Reinforcement Learning
   Evgenii Nikishin, Max Schwarzer, P. D'Oro, Pierre-Luc Bacon, Aaron C. Courville (OnRL), 16 May 2022