ResearchTrend.AI

arXiv:2006.03041
Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and Variance Reduction

4 June 2020
Gen Li
Yuting Wei
Yuejie Chi
Yuantao Gu
Yuxin Chen
Abstract

Asynchronous Q-learning aims to learn the optimal action-value function (or Q-function) of a Markov decision process (MDP) based on a single trajectory of Markovian samples induced by a behavior policy. Focusing on a $\gamma$-discounted MDP with state space $\mathcal{S}$ and action space $\mathcal{A}$, we demonstrate that the $\ell_{\infty}$-based sample complexity of classical asynchronous Q-learning --- namely, the number of samples needed to yield an entrywise $\varepsilon$-accurate estimate of the Q-function --- is at most on the order of
$$\frac{1}{\mu_{\min}(1-\gamma)^5 \varepsilon^2} + \frac{t_{\mathrm{mix}}}{\mu_{\min}(1-\gamma)}$$
up to some logarithmic factor, provided that a proper constant learning rate is adopted. Here, $t_{\mathrm{mix}}$ and $\mu_{\min}$ denote, respectively, the mixing time and the minimum state-action occupancy probability of the sample trajectory. The first term of this bound matches the sample complexity in the synchronous case with independent samples drawn from the stationary distribution of the trajectory. The second term reflects the cost of waiting for the empirical distribution of the Markovian trajectory to reach steady state, which is incurred at the very beginning and becomes amortized as the algorithm runs. Encouragingly, the above bound improves upon the state-of-the-art result of Qu and Wierman (2020) by a factor of at least $|\mathcal{S}||\mathcal{A}|$ for all scenarios, and by a factor of at least $t_{\mathrm{mix}}|\mathcal{S}||\mathcal{A}|$ for any sufficiently small accuracy level $\varepsilon$. Furthermore, we demonstrate that the scaling on the effective horizon $\frac{1}{1-\gamma}$ can be improved by means of variance reduction.
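To make the setting concrete, the following is a minimal sketch of classical asynchronous Q-learning as described in the abstract: a single Markovian trajectory is generated by a behavior policy, and at each step only the just-visited (state, action) entry of the Q-table is updated with a constant learning rate. The MDP representation (`P`, `R`, `behavior_policy`) and the parameter choices below are illustrative assumptions, not the paper's exact algorithm or constants.

```python
import numpy as np

def async_q_learning(P, R, gamma, behavior_policy, n_steps, eta, rng):
    """Asynchronous Q-learning along a single Markovian trajectory.

    A minimal sketch under assumed inputs:
      P[s, a]            : transition distribution over next states, shape (S, A, S)
      R[s, a]            : deterministic reward in [0, 1], shape (S, A)
      behavior_policy[s] : distribution over actions in state s, shape (S, A)
      eta                : constant learning rate, as in the setting analyzed
    """
    n_states, n_actions = R.shape
    Q = np.zeros((n_states, n_actions))
    s = int(rng.integers(n_states))
    for _ in range(n_steps):
        a = int(rng.choice(n_actions, p=behavior_policy[s]))
        s_next = int(rng.choice(n_states, p=P[s, a]))
        # One-sample TD target; only the visited entry Q[s, a] is updated,
        # which is what makes the scheme "asynchronous".
        target = R[s, a] + gamma * Q[s_next].max()
        Q[s, a] += eta * (target - Q[s, a])
        s = s_next
    return Q
```

Since rewards lie in $[0, 1]$, every iterate stays within $[0, \frac{1}{1-\gamma}]$ entrywise; how many trajectory samples are needed for entrywise $\varepsilon$-accuracy is exactly the quantity the bound above characterizes, with $\mu_{\min}$ capturing how rarely the least-visited entry gets updated.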
