Finite-Sample Analysis of Nonlinear Stochastic Approximation with Applications in Reinforcement Learning

27 May 2019
Zaiwei Chen
Sheng Zhang
Thinh T. Doan
John-Paul Clarke
S. T. Maguluri
arXiv:1905.11425
Abstract

Motivated by applications in reinforcement learning (RL), we study a nonlinear stochastic approximation (SA) algorithm under Markovian noise, and establish its finite-sample convergence bounds under various stepsizes. Specifically, we show that when using a constant stepsize (i.e., $\alpha_k \equiv \alpha$), the algorithm achieves exponentially fast convergence to a neighborhood (with radius $O(\alpha \log(1/\alpha))$) around the desired limit point. When using diminishing stepsizes with an appropriate decay rate, the algorithm converges at rate $O(\log(k)/k)$. Our proof is based on Lyapunov drift arguments, and to handle the Markovian noise, we exploit the fast mixing of the underlying Markov chain. To demonstrate the generality of our theoretical results on Markovian SA, we use them to derive finite-sample bounds for the popular $Q$-learning with linear function approximation algorithm, under a condition on the behavior policy. Importantly, we do not need to assume that the samples are i.i.d., and we do not require an artificial projection step in the algorithm to maintain boundedness of the iterates. Numerical simulations corroborate our theoretical results.

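As a concrete illustration of the setting described in the abstract, the sketch below runs $Q$-learning with linear function approximation along a single Markovian trajectory generated by a fixed behavior policy, using a constant stepsize and no projection step. This is a minimal sketch of the algorithm class the paper analyzes, not the authors' code; the environment, feature map, and behavior policy are illustrative placeholders.

```python
import numpy as np

def q_learning_linear_fa(env_step, behavior_policy, phi, num_actions, dim,
                         alpha=0.05, gamma=0.9, num_iters=10_000, s0=0, seed=0):
    """Q-learning with linear function approximation along a single Markovian
    trajectory (no i.i.d. sampling, no projection step), with constant stepsize alpha.

    Update: theta <- theta + alpha * phi(s,a) * (r + gamma * max_b phi(s',b)^T theta
                                                 - phi(s,a)^T theta)
    """
    rng = np.random.default_rng(seed)
    theta = np.zeros(dim)
    s = s0
    for _ in range(num_iters):
        a = behavior_policy(s, rng)           # action from the fixed behavior policy
        s_next, r = env_step(s, a, rng)       # one Markovian transition
        q_sa = phi(s, a) @ theta
        q_next = max(phi(s_next, b) @ theta for b in range(num_actions))
        td_error = r + gamma * q_next - q_sa
        theta = theta + alpha * td_error * phi(s, a)  # SA iterate, constant stepsize
        s = s_next                            # continue along the same trajectory
    return theta

# Toy usage (illustrative only): 2-state, 2-action MDP with one-hot (s, a) features.
nS, nA = 2, 2

def phi(s, a):
    f = np.zeros(nS * nA)
    f[s * nA + a] = 1.0
    return f

def env_step(s, a, rng):
    s_next = int(rng.integers(nS))            # toy random dynamics
    r = 1.0 if s == a else 0.0                # toy reward
    return s_next, r

def behavior_policy(s, rng):
    return int(rng.integers(nA))              # uniform behavior policy (explores all actions)

theta = q_learning_linear_fa(env_step, behavior_policy, phi, nA, dim=nS * nA)
print(theta)
```

With one-hot features this reduces to tabular Q-learning; the point of the sketch is only to show the single-trajectory (Markovian) sampling and the unprojected constant-stepsize iterate that the paper's bounds cover.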