Keep Doing What Worked: Behavioral Modelling Priors for Offline Reinforcement Learning

19 February 2020
arXiv:2002.08396
Noah Y. Siegel, Jost Tobias Springenberg, Felix Berkenkamp, A. Abdolmaleki, Michael Neunert, Thomas Lampe, Roland Hafner, Nicolas Heess, Martin Riedmiller
OffRL

Papers citing "Keep Doing What Worked: Behavioral Modelling Priors for Offline Reinforcement Learning"

9 / 9 papers shown

Deterministic Uncertainty Propagation for Improved Model-Based Offline Reinforcement Learning
Abdullah Akgul, Manuel Haußmann, M. Kandemir (OffRL)
17 Jan 2025

OMG-RL: Offline Model-based Guided Reward Learning for Heparin Treatment
Yooseok Lim, Sujee Lee (OffRL)
03 Jan 2025

Geometric-Averaged Preference Optimization for Soft Preference Labels
Hiroki Furuta, Kuang-Huei Lee, Shixiang Shane Gu, Y. Matsuo, Aleksandra Faust, Heiga Zen, Izzeddin Gur
31 Dec 2024

Pretraining Decision Transformers with Reward Prediction for In-Context Multi-task Structured Bandit Learning
Subhojyoti Mukherjee, Josiah P. Hanna, Qiaomin Xie, Robert Nowak
07 Jun 2024

Overcoming Model Bias for Robust Offline Deep Reinforcement Learning
Phillip Swazinna, Steffen Udluft, Thomas Runkler (OffRL)
12 Aug 2020

Stabilizing Off-Policy Q-Learning via Bootstrapping Error Reduction
Aviral Kumar, Justin Fu, George Tucker, Sergey Levine (OffRL, OnRL)
03 Jun 2019

Exploiting Hierarchy for Learning and Transfer in KL-regularized RL
Dhruva Tirumala, Hyeonwoo Noh, Alexandre Galashov, Leonard Hasenclever, Arun Ahuja, Greg Wayne, Razvan Pascanu, Yee Whye Teh, N. Heess (OffRL)
18 Mar 2019

Learning by Playing - Solving Sparse Reward Tasks from Scratch
Martin Riedmiller, Roland Hafner, Thomas Lampe, Michael Neunert, Jonas Degrave, T. Wiele, Volodymyr Mnih, N. Heess, Jost Tobias Springenberg
28 Feb 2018

DeepMind Control Suite
Yuval Tassa, Yotam Doron, Alistair Muldal, Tom Erez, Yazhe Li, ..., A. Abdolmaleki, J. Merel, Andrew Lefrancq, Timothy Lillicrap, Martin Riedmiller (ELM, LM&Ro, BDL)
02 Jan 2018