Sample Complexity Bounds for Two Timescale Value-based Reinforcement Learning Algorithms

Tengyu Xu, Yingbin Liang
arXiv:2011.05053, 10 November 2020
Papers citing "Sample Complexity Bounds for Two Timescale Value-based Reinforcement Learning Algorithms" (10 papers)
  1. Regularized Q-Learning with Linear Function Approximation — Jiachen Xi, Alfredo Garcia, P. Momcilovic (26 Jan 2024)
  2. Central Limit Theorem for Two-Timescale Stochastic Approximation with Markovian Noise: Theory and Applications — Jie Hu, Vishwaraj Doshi, Do Young Eun (17 Jan 2024)
  3. Tight Finite Time Bounds of Two-Time-Scale Linear Stochastic Approximation with Markovian Noise — Shaan ul Haque, S. Khodadadian, S. T. Maguluri (31 Dec 2023)
  4. High-probability sample complexities for policy evaluation with linear function approximation — Gen Li, Weichen Wu, Yuejie Chi, Cong Ma, Alessandro Rinaldo, Yuting Wei (30 May 2023) [OffRL]
  5. Gauss-Newton Temporal Difference Learning with Nonlinear Function Approximation — Zhifa Ke, Junyu Zhang, Zaiwen Wen (25 Feb 2023)
  6. Finite-Time Error Bounds for Greedy-GQ — Yue Wang, Yi Zhou, Shaofeng Zou (06 Sep 2022)
  7. Target Network and Truncation Overcome The Deadly Triad in Q-Learning — Zaiwei Chen, John-Paul Clarke, S. T. Maguluri (05 Mar 2022)
  8. PER-ETD: A Polynomially Efficient Emphatic Temporal Difference Learning Method — Ziwei Guan, Tengyu Xu, Yingbin Liang (13 Oct 2021)
  9. Online Robust Reinforcement Learning with Model Uncertainty — Yue Wang, Shaofeng Zou (29 Sep 2021) [OOD, OffRL]
  10. Sample and Communication-Efficient Decentralized Actor-Critic Algorithms with Finite-Time Analysis — Ziyi Chen, Yi Zhou, Rongrong Chen, Shaofeng Zou (08 Sep 2021)