Near-optimal Offline Reinforcement Learning with Linear Representation: Leveraging Variance Information with Pessimism

11 March 2022
Ming Yin, Yaqi Duan, Mengdi Wang, Yu-Xiang Wang
    OffRL

Papers citing "Near-optimal Offline Reinforcement Learning with Linear Representation: Leveraging Variance Information with Pessimism"

Showing 2 of 52 citing papers
Variance-Aware Regret Bounds for Undiscounted Reinforcement Learning in MDPs
M. S. Talebi, Odalric-Ambrym Maillard
05 Mar 2018

Contextual Decision Processes with Low Bellman Rank are PAC-Learnable
Nan Jiang, A. Krishnamurthy, Alekh Agarwal, John Langford, Robert Schapire
29 Oct 2016