Optimal Convergence Rate for Exact Policy Mirror Descent in Discounted Markov Decision Processes
Emmeran Johnson, Ciara Pike-Burke, Patrick Rebeschini
arXiv:2302.11381, 22 February 2023
Papers citing "Optimal Convergence Rate for Exact Policy Mirror Descent in Discounted Markov Decision Processes" (9 of 9 papers shown):

Functional Acceleration for Policy Mirror Descent. Veronica Chelu, Doina Precup. 23 Jul 2024.
A general sample complexity analysis of vanilla policy gradient. Rui Yuan, Robert Mansel Gower, A. Lazaric. 23 Jul 2021.
On the Linear Convergence of Natural Policy Gradient Algorithm. S. Khodadadian, P. Jhunjhunwala, Sushil Mahavir Varma, S. T. Maguluri. 04 May 2021.
Breaking the Sample Size Barrier in Model-Based Reinforcement Learning with a Generative Model. Gen Li, Yuting Wei, Yuejie Chi, Yuxin Chen. 26 May 2020.
Adaptive Trust Region Policy Optimization: Global Convergence and Faster Rates for Regularized MDPs. Lior Shani, Yonathan Efroni, Shie Mannor. 06 Sep 2019.
Model-Based Reinforcement Learning with a Generative Model is Minimax Optimal. Alekh Agarwal, Sham Kakade, Lin F. Yang. 10 Jun 2019.
Global Optimality Guarantees For Policy Gradient Methods. Jalaj Bhandari, Daniel Russo. 05 Jun 2019.
A Theory of Regularized Markov Decision Processes. Matthieu Geist, B. Scherrer, Olivier Pietquin. 31 Jan 2019.
Improved and Generalized Upper Bounds on the Complexity of Policy Iteration. B. Scherrer. 03 Jun 2013.