ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Optimal policy evaluation using kernel-based temporal difference methods
Yaqi Duan, Mengdi Wang, Martin J. Wainwright
arXiv:2109.12002 · 24 September 2021 · OffRL

Papers citing "Optimal policy evaluation using kernel-based temporal difference methods"

19 / 19 papers shown
Hybrid Transfer Reinforcement Learning: Provable Sample Efficiency from Shifted-Dynamics Data
Chengrui Qu, Laixi Shi, Kishan Panaganti, Pengcheng You, Adam Wierman · OffRL, OnRL · 06 Nov 2024
Is Offline Decision Making Possible with Only Few Samples? Reliable Decisions in Data-Starved Bandits via Trust Region Enhancement
Ruiqi Zhang, Yuexiang Zhai, Andrea Zanette · 24 Feb 2024
Improved Bayesian Regret Bounds for Thompson Sampling in Reinforcement Learning
Ahmadreza Moradipari, M. Pedramfar, Modjtaba Shokrian Zini, Vaneet Aggarwal · 30 Oct 2023
The Optimal Approximation Factors in Misspecified Off-Policy Value Function Estimation
P. Amortila, Nan Jiang, Csaba Szepesvári · OffRL · 25 Jul 2023
High-probability sample complexities for policy evaluation with linear function approximation
Gen Li, Weichen Wu, Yuejie Chi, Cong Ma, Alessandro Rinaldo, Yuting Wei · OffRL · 30 May 2023
VIPeR: Provably Efficient Algorithm for Offline RL with Neural Function Approximation
Thanh Nguyen-Tang, R. Arora · OffRL · 24 Feb 2023
Kernel-based off-policy estimation without overlap: Instance optimality beyond semiparametric efficiency
Wenlong Mou, Peng Ding, Martin J. Wainwright, Peter L. Bartlett · OffRL · 16 Jan 2023
Inference on Time Series Nonparametric Conditional Moment Restrictions Using General Sieves
Xiaohong Chen, Yuan Liao, Weichen Wang · 31 Dec 2022
On Instance-Dependent Bounds for Offline Reinforcement Learning with Linear Function Approximation
Thanh Nguyen-Tang, Ming Yin, Sunil R. Gupta, Svetha Venkatesh, R. Arora · OffRL · 23 Nov 2022
Krylov-Bellman boosting: Super-linear policy evaluation in general state spaces
Eric Xia, Martin J. Wainwright · OffRL · 20 Oct 2022
A Complete Characterization of Linear Estimators for Offline Policy Evaluation
Juan C. Perdomo, A. Krishnamurthy, Peter L. Bartlett, Sham Kakade · OffRL · 08 Mar 2022
Off-Policy Fitted Q-Evaluation with Differentiable Function Approximators: Z-Estimation and Inference Theory
Ruiqi Zhang, Xuezhou Zhang, Chengzhuo Ni, Mengdi Wang · OffRL · 10 Feb 2022
Optimal Estimation of Off-Policy Policy Gradient via Double Fitted Iteration
Chengzhuo Ni, Ruiqi Zhang, Xiang Ji, Xuezhou Zhang, Mengdi Wang · OffRL · 31 Jan 2022
Instance-Dependent Confidence and Early Stopping for Reinforcement Learning
K. Khamaru, Eric Xia, Martin J. Wainwright, Michael I. Jordan · 21 Jan 2022
On Well-posedness and Minimax Optimal Rates of Nonparametric Q-function Estimation in Off-policy Evaluation
Xiaohong Chen, Zhengling Qi · OffRL · 17 Jan 2022
Accelerated and instance-optimal policy evaluation with linear function approximation
Tianjiao Li, Guanghui Lan, A. Pananjady · OffRL · 24 Dec 2021
Perturbational Complexity by Distribution Mismatch: A Systematic Analysis of Reinforcement Learning in Reproducing Kernel Hilbert Space
Jihao Long, Jiequn Han · 05 Nov 2021
Sample Complexity of Offline Reinforcement Learning with Deep ReLU Networks
Thanh Nguyen-Tang, Sunil R. Gupta, Hung The Tran, Svetha Venkatesh · OffRL · 11 Mar 2021
Double Reinforcement Learning for Efficient Off-Policy Evaluation in Markov Decision Processes
Nathan Kallus, Masatoshi Uehara · OffRL · 22 Aug 2019