A Contraction Approach to Model-based Reinforcement Learning
Ting-Han Fan, Peter J. Ramadge
arXiv:2009.08586 · 18 September 2020 · OffRL

Papers citing "A Contraction Approach to Model-based Reinforcement Learning" (21 of 21 papers shown)
1. MOPO: Model-based Offline Policy Optimization
   Tianhe Yu, G. Thomas, Lantao Yu, Stefano Ermon, James Zou, Sergey Levine, Chelsea Finn, Tengyu Ma · OffRL · 27 May 2020

2. Minimax-Optimal Off-Policy Evaluation with Linear Function Approximation
   Yaqi Duan, Mengdi Wang · OffRL · 21 Feb 2020

3. Learning to Combat Compounding-Error in Model-Based Reinforcement Learning
   Chenjun Xiao, Yifan Wu, Chen Ma, Dale Schuurmans, Martin Müller · OffRL · 24 Dec 2019

4. Provably Efficient Reinforcement Learning with Linear Function Approximation
   Chi Jin, Zhuoran Yang, Zhaoran Wang, Michael I. Jordan · 11 Jul 2019

5. Benchmarking Model-Based Reinforcement Learning
   Tingwu Wang, Xuchan Bao, I. Clavera, Jerrick Hoang, Yeming Wen, Eric D. Langlois, Matthew Shunshi Zhang, Guodong Zhang, Pieter Abbeel, Jimmy Ba · OffRL · 03 Jul 2019

6. When to Trust Your Model: Model-Based Policy Optimization
   Michael Janner, Justin Fu, Marvin Zhang, Sergey Levine · OffRL · 19 Jun 2019

7. Model-Based Reinforcement Learning via Meta-Policy Optimization
   I. Clavera, Jonas Rothfuss, John Schulman, Yasuhiro Fujita, Tamim Asfour, Pieter Abbeel · 14 Sep 2018

8. Algorithmic Framework for Model-based Deep Reinforcement Learning with Theoretical Guarantees
   Yuping Luo, Huazhe Xu, Yuanzhi Li, Yuandong Tian, Trevor Darrell, Tengyu Ma · OffRL · 10 Jul 2018

9. Sample-Efficient Reinforcement Learning with Stochastic Ensemble Value Expansion
   Jacob Buckman, Danijar Hafner, George Tucker, E. Brevdo, Honglak Lee · 04 Jul 2018

10. Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models
    Kurtland Chua, Roberto Calandra, R. McAllister, Sergey Levine · BDL · 30 May 2018

11. Model-Ensemble Trust-Region Policy Optimization
    Thanard Kurutach, I. Clavera, Yan Duan, Aviv Tamar, Pieter Abbeel · 28 Feb 2018

12. Addressing Function Approximation Error in Actor-Critic Methods
    Scott Fujimoto, H. V. Hoof, David Meger · OffRL · 26 Feb 2018

13. Clipped Action Policy Gradient
    Yasuhiro Fujita, S. Maeda · OffRL · 21 Feb 2018

14. Spectral Normalization for Generative Adversarial Networks
    Takeru Miyato, Toshiki Kataoka, Masanori Koyama, Yuichi Yoshida · ODL · 16 Feb 2018

15. Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor
    Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, Sergey Levine · 04 Jan 2018

16. Neural Network Dynamics for Model-Based Deep Reinforcement Learning with Model-Free Fine-Tuning
    Anusha Nagabandi, G. Kahn, R. Fearing, Sergey Levine · 08 Aug 2017

17. Improved Training of Wasserstein GANs
    Ishaan Gulrajani, Faruk Ahmed, Martín Arjovsky, Vincent Dumoulin, Aaron Courville · GAN · 31 Mar 2017

18. Wasserstein GAN
    Martín Arjovsky, Soumith Chintala, Léon Bottou · GAN · 26 Jan 2017

19. Generative Adversarial Imitation Learning
    Jonathan Ho, Stefano Ermon · GAN · 10 Jun 2016

20. Continuous Deep Q-Learning with Model-based Acceleration
    S. Gu, Timothy Lillicrap, Ilya Sutskever, Sergey Levine · 02 Mar 2016

21. A Reduction of Imitation Learning and Structured Prediction to No-Regret Online Learning
    Stéphane Ross, Geoffrey J. Gordon, J. Andrew Bagnell · OffRL · 02 Nov 2010