Fast Rates for the Regret of Offline Reinforcement Learning

31 January 2021
Yichun Hu
Nathan Kallus
Masatoshi Uehara
    OffRL
arXiv:2102.00479 · PDF · HTML
Abstract

We study the regret of reinforcement learning from offline data generated by a fixed behavior policy in an infinite-horizon discounted Markov decision process (MDP). While existing analyses of common approaches, such as fitted $Q$-iteration (FQI), suggest an $O(1/\sqrt{n})$ convergence for regret, empirical behavior exhibits \emph{much} faster convergence. In this paper, we present a finer regret analysis that exactly characterizes this phenomenon by providing fast rates for the regret convergence. First, we show that given any estimate for the optimal quality function $Q^*$, the regret of the policy it defines converges at a rate given by the exponentiation of the $Q^*$-estimate's pointwise convergence rate, thus speeding it up. The level of exponentiation depends on the level of noise in the \emph{decision-making} problem, rather than the estimation problem. We establish such noise levels for linear and tabular MDPs as examples. Second, we provide new analyses of FQI and Bellman residual minimization to establish the correct pointwise convergence guarantees. As specific cases, our results imply $O(1/n)$ regret rates in linear cases and $\exp(-\Omega(n))$ regret rates in tabular cases. We extend our findings to general function approximation with regret guarantees based on $L_p$-convergence rates for estimating $Q^*$ rather than pointwise rates, where $L_2$ guarantees for nonparametric $Q^*$-estimation can be ensured under mild conditions.
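
The central mechanism, rate exponentiation driven by decision-making noise, can be illustrated with a margin-style calculation. The sketch below is a simplified, hedged rendering of that idea: the gap $\Delta$, the exponent $\alpha$, and the constants are illustrative placeholders, it ignores how errors propagate through the discounted MDP (the $1/(1-\gamma)$ factors), and it is not the paper's exact condition or theorem.

```latex
% Illustrative margin-style sketch of rate exponentiation (not the paper's exact theorem).
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
Suppose the estimate satisfies $\sup_{s,a}|\hat{Q}(s,a)-Q^*(s,a)|\le\epsilon_n$ and the
decision-making noise obeys a margin condition with exponent $\alpha$: for the action gap
$\Delta(s)=Q^*(s,\pi^*(s))-\max_{a\neq\pi^*(s)}Q^*(s,a)$,
\[
  \mathbb{P}\bigl(0<\Delta(S)\le t\bigr)\le C\,t^{\alpha},\qquad t>0.
\]
The greedy policy $\hat{\pi}(s)\in\arg\max_a\hat{Q}(s,a)$ can only choose a suboptimal action
in states with $\Delta(s)\le 2\epsilon_n$, and each such mistake costs at most $2\epsilon_n$, so
\[
  \operatorname{Regret}(\hat{\pi})
  \;\lesssim\; \epsilon_n\cdot\mathbb{P}\bigl(0<\Delta(S)\le 2\epsilon_n\bigr)
  \;\lesssim\; \epsilon_n^{\,1+\alpha}.
\]
With $\epsilon_n=O(1/\sqrt{n})$ and $\alpha=1$ this yields an $O(1/n)$ regret rate, as stated
for the linear case; when the gap is uniformly bounded away from zero (tabular case), the
regret vanishes once $\epsilon_n$ drops below half the minimum gap, consistent with
$\exp(-\Omega(n))$ rates.
\end{document}
```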

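The abstract also refers to fitted $Q$-iteration as the workhorse estimator. Below is a minimal, generic FQI sketch on a fixed offline dataset, assuming a finite action set, transitions logged as (state, action, reward, next_state) arrays, and a scikit-learn regressor as the function approximator; the function names and regressor choice are illustrative, not the authors' implementation.

```python
# Minimal fitted Q-iteration (FQI) sketch for offline data.
# Assumptions (not from the paper): finite action set, 2-D state array of shape (n, d),
# and an off-the-shelf scikit-learn regressor as the function approximator.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor


def fitted_q_iteration(states, actions, rewards, next_states,
                       n_actions, gamma=0.99, n_iters=50):
    """Approximate Q^* by iterated regression on a fixed behavior-policy dataset."""
    states = np.asarray(states, dtype=float)
    next_states = np.asarray(next_states, dtype=float)
    actions = np.asarray(actions, dtype=int)
    rewards = np.asarray(rewards, dtype=float)
    n = len(states)

    def featurize(s, a):
        # Concatenate state features with a one-hot encoding of the action.
        onehot = np.zeros((len(s), n_actions))
        onehot[np.arange(len(s)), a] = 1.0
        return np.hstack([s, onehot])

    X = featurize(states, actions)
    q = None
    for _ in range(n_iters):
        if q is None:
            targets = rewards  # first iterate: regress on immediate rewards
        else:
            # Bootstrapped Bellman target: r + gamma * max_a' Q_k(s', a')
            next_q = np.column_stack([
                q.predict(featurize(next_states, np.full(n, a, dtype=int)))
                for a in range(n_actions)
            ])
            targets = rewards + gamma * next_q.max(axis=1)
        q = ExtraTreesRegressor(n_estimators=50, random_state=0).fit(X, targets)
    return q


def greedy_action(q, state, n_actions):
    # Greedy policy induced by the final Q-estimate.
    s = np.asarray(state, dtype=float).reshape(1, -1)
    scores = [q.predict(np.hstack([s, np.eye(n_actions)[a:a + 1]]))[0]
              for a in range(n_actions)]
    return int(np.argmax(scores))
```

The abstract's point is then that the regret of the greedy policy induced by the returned $Q$-estimate converges faster than the estimate's own pointwise $O(1/\sqrt{n})$ rate.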