Constant Stepsize Q-learning: Distributional Convergence, Bias and Extrapolation
Yixuan Zhang, Qiaomin Xie
arXiv:2401.13884, 25 January 2024

Papers citing "Constant Stepsize Q-learning: Distributional Convergence, Bias and Extrapolation"

5 / 5 papers shown

A Piecewise Lyapunov Analysis of Sub-quadratic SGD: Applications to Robust and Quantile Regression
Yixuan Zhang, Dongyan, Yudong Chen, Qiaomin Xie
11 Apr 2025

Two-Timescale Linear Stochastic Approximation: Constant Stepsizes Go a Long Way
Jeongyeol Kwon, Luke Dotson, Yudong Chen, Qiaomin Xie
16 Oct 2024

Nonasymptotic Analysis of Stochastic Gradient Descent with the Richardson-Romberg Extrapolation
Marina Sheshukova, Denis Belomestny, Alain Durmus, Eric Moulines, Alexey Naumov, S. Samsonov
07 Oct 2024

Computing the Bias of Constant-step Stochastic Approximation with Markovian Noise
Sebastian Allmeier, Nicolas Gast
23 May 2024

Stochastic Gradient Descent with Dependent Data for Offline Reinforcement Learning
Jing-rong Dong, Xin T. Tong
OffRL
06 Feb 2022