Settling the Sample Complexity of Model-Based Offline Reinforcement Learning

Gen Li, Laixi Shi, Yuxin Chen, Yuejie Chi, Yuting Wei
11 April 2022 · arXiv:2204.05275 · OffRL

Papers citing "Settling the Sample Complexity of Model-Based Offline Reinforcement Learning"

4 / 54 papers shown
Pessimism for Offline Linear Contextual Bandits using $\ell_p$ Confidence Sets
Gen Li, Cong Ma, Nathan Srebro
OffRL · 21 May 2022
The Efficacy of Pessimism in Asynchronous Q-Learning
Yuling Yan, Gen Li, Yuxin Chen, Jianqing Fan
OffRL · 14 Mar 2022
Testing Stationarity and Change Point Detection in Reinforcement Learning
Mengbing Li, C. Shi, Zhanghua Wu, Piotr Fryzlewicz
OffRL · 03 Mar 2022
Pessimistic Q-Learning for Offline Reinforcement Learning: Towards Optimal Sample Complexity
Laixi Shi, Gen Li, Yuting Wei, Yuxin Chen, Yuejie Chi
OffRL · 28 Feb 2022