ResearchTrend.AI
MOPO: Model-based Offline Policy Optimization
arXiv:2005.13239 · 27 May 2020
Tianhe Yu, G. Thomas, Lantao Yu, Stefano Ermon, James Zou, Sergey Levine, Chelsea Finn, Tengyu Ma
OffRL

Papers citing "MOPO: Model-based Offline Policy Optimization" (7 of 207 shown)
The Importance of Pessimism in Fixed-Dataset Policy Optimization
Jacob Buckman, Carles Gelada, Marc G. Bellemare
OffRL · 15 Sep 2020
Learning Off-Policy with Online Planning
Harshit S. Sikchi, Wenxuan Zhou, David Held
OffRL · 23 Aug 2020
QPLEX: Duplex Dueling Multi-Agent Q-Learning
Jianhao Wang, Zhizhou Ren, Terry Liu, Yang Yu, Chongjie Zhang
OffRL · 03 Aug 2020
Model-based Reinforcement Learning for Semi-Markov Decision Processes with Neural ODEs
Jianzhun Du, Joseph D. Futoma, Finale Doshi-Velez
29 Jun 2020
Deep Dynamics Models for Learning Dexterous Manipulation
Anusha Nagabandi, K. Konolige, Sergey Levine, Vikash Kumar
25 Sep 2019
Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles
Balaji Lakshminarayanan, Alexander Pritzel, Charles Blundell
UQCV, BDL · 05 Dec 2016
Off-Policy Actor-Critic
T. Degris, Martha White, R. Sutton
OffRL, CML · 22 May 2012