Span-Based Optimal Sample Complexity for Average Reward MDPs

22 November 2023
M. Zurek, Yudong Chen
arXiv:2311.13469
Abstract

We study the sample complexity of learning an $\varepsilon$-optimal policy in an average-reward Markov decision process (MDP) under a generative model. We establish the complexity bound $\widetilde{O}\left(\frac{SAH}{\varepsilon^2}\right)$, where $H$ is the span of the bias function of the optimal policy and $SA$ is the cardinality of the state-action space. Our result is the first that is minimax optimal (up to log factors) in all parameters $S$, $A$, $H$, and $\varepsilon$, improving on existing work that either assumes uniformly bounded mixing times for all policies or has suboptimal dependence on the parameters. Our result is based on reducing the average-reward MDP to a discounted MDP. To establish the optimality of this reduction, we develop improved bounds for $\gamma$-discounted MDPs, showing that $\widetilde{O}\left(\frac{SAH}{(1-\gamma)^2\varepsilon^2}\right)$ samples suffice to learn an $\varepsilon$-optimal policy in weakly communicating MDPs in the regime $\gamma \geq 1 - \frac{1}{H}$, circumventing the well-known lower bound of $\widetilde{\Omega}\left(\frac{SA}{(1-\gamma)^3\varepsilon^2}\right)$ for general $\gamma$-discounted MDPs. Our analysis develops upper bounds on certain instance-dependent variance parameters in terms of the span parameter. These bounds are tighter than those based on the mixing time or diameter of the MDP and may be of broader use.
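
As a back-of-the-envelope illustration of how the discounted bound yields the average-reward bound, consider the following sketch. It assumes a standard form of the reduction guarantee, with illustrative constants that need not match the paper's: an $\varepsilon_\gamma$-optimal policy for the $\gamma$-discounted MDP is roughly $\big((1-\gamma)\varepsilon_\gamma + (1-\gamma)H\big)$-optimal in average reward. Choose

$$1-\gamma = \frac{\varepsilon}{2H}, \qquad \varepsilon_\gamma = \frac{\varepsilon}{2(1-\gamma)} = H,$$

so that for $\varepsilon \le 2$ the regime $\gamma \geq 1 - \frac{1}{H}$ holds, and the average-reward suboptimality is at most $(1-\gamma)\varepsilon_\gamma + (1-\gamma)H = \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varepsilon$. Substituting $\varepsilon_\gamma = H$ into the discounted bound then gives

$$\widetilde{O}\left(\frac{SAH}{(1-\gamma)^2\,\varepsilon_\gamma^2}\right) = \widetilde{O}\left(\frac{SAH \cdot (2H/\varepsilon)^2}{H^2}\right) = \widetilde{O}\left(\frac{SAH}{\varepsilon^2}\right),$$

matching the average-reward complexity stated above.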

View on arXiv