Confident Approximate Policy Iteration for Efficient Local Planning in $q^\pi$-realizable MDPs

27 October 2022
Gellert Weisz, András György, Tadashi Kozuno, Csaba Szepesvári

Papers citing "Confident Approximate Policy Iteration for Efficient Local Planning in $q^\pi$-realizable MDPs"

5 papers shown

Offline RL via Feature-Occupancy Gradient Ascent
Gergely Neu, Nneka Okolo
OffRL · 22 May 2024

Regularization and Variance-Weighted Regression Achieves Minimax Optimality in Linear MDPs: Theory and Practice
Toshinori Kitamura, Tadashi Kozuno, Yunhao Tang, Nino Vieillard, Michal Valko, ..., Olivier Pietquin, M. Geist, Csaba Szepesvári, Wataru Kumagai, Yutaka Matsuo
OffRL · 22 May 2023

Exponential Hardness of Reinforcement Learning with Linear Function Approximation
Daniel M. Kane, Sihan Liu, Shachar Lovett, G. Mahajan, Csaba Szepesvári, Gellert Weisz
25 Feb 2023

Sample Efficient Deep Reinforcement Learning via Local Planning
Dong Yin, S. Thiagarajan, N. Lazić, Nived Rajaraman, Botao Hao, Csaba Szepesvári
29 Jan 2023

Approximation Benefits of Policy Gradient Methods with Aggregated States
Daniel Russo
22 Jul 2020