Bellman Residual Orthogonalization for Offline Reinforcement Learning (arXiv:2203.12786)

24 March 2022
Andrea Zanette, Martin J. Wainwright
OffRL

Papers citing "Bellman Residual Orthogonalization for Offline Reinforcement Learning" (5 of 5 papers shown):

Policy Finetuning in Reinforcement Learning via Design of Experiments using Offline Data
Ruiqi Zhang, Andrea Zanette
OffRL, OnRL
10 Jul 2023
Offline Minimax Soft-Q-learning Under Realizability and Partial Coverage
Masatoshi Uehara, Nathan Kallus, Jason D. Lee, Wen Sun
OffRL
05 Feb 2023
When is Realizability Sufficient for Off-Policy Reinforcement Learning?
Andrea Zanette
OffRL
10 Nov 2022
On the Statistical Efficiency of Reward-Free Exploration in Non-Linear RL
Jinglin Chen, Aditya Modi, A. Krishnamurthy, Nan Jiang, Alekh Agarwal
21 Jun 2022
Pessimistic Model-based Offline Reinforcement Learning under Partial Coverage
Masatoshi Uehara, Wen Sun
OffRL
13 Jul 2021