Nearly Horizon-Free Offline Reinforcement Learning

25 March 2021
Tongzheng Ren
Jialian Li
Bo Dai
S. Du
Sujay Sanghavi
Abstract

We revisit offline reinforcement learning on episodic time-homogeneous Markov Decision Processes (MDPs). For tabular MDPs with $S$ states and $A$ actions, or linear MDPs with anchor points and feature dimension $d$, given $K$ collected episodes of data with minimum visiting probability $d_m$ over (anchor) state-action pairs, we obtain nearly horizon-$H$-free sample complexity bounds for offline reinforcement learning when the total reward is upper bounded by $1$. Specifically: 1. For offline policy evaluation, we obtain an $\tilde{O}\left(\sqrt{\frac{1}{Kd_m}}\right)$ error bound for the plug-in estimator, which matches the lower bound up to logarithmic factors and has no additional dependency on $\mathrm{poly}(H, S, A, d)$ in the higher-order term. 2. For offline policy optimization, we obtain an $\tilde{O}\left(\sqrt{\frac{1}{Kd_m}} + \frac{\min(S, d)}{Kd_m}\right)$ sub-optimality gap for the empirical optimal policy, which approaches the lower bound up to logarithmic factors and a high-order term, improving upon the best known result of \cite{cui2020plug}, which has additional $\mathrm{poly}(H, S, d)$ factors in the main term. To the best of our knowledge, these are the \emph{first} nearly horizon-free bounds for episodic time-homogeneous offline tabular MDPs and linear MDPs with anchor points. Central to our analysis is a simple yet effective recursion-based method to bound a "total variance" term in the offline scenario, which could be of independent interest.
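For intuition, the plug-in estimator analyzed above is the standard model-based approach: estimate the transition kernel and rewards empirically from the offline episodes, then evaluate the target policy by backward induction in that empirical MDP. The sketch below illustrates this for the tabular case; it is an illustration only, not the paper's exact algorithm, and the data format, function name, and handling of unvisited state-action pairs are assumptions made for the example.

```python
# Minimal sketch of a plug-in estimator for offline policy evaluation in a
# tabular, time-homogeneous episodic MDP (illustrative; not the paper's code).
import numpy as np

def plug_in_policy_value(episodes, policy, S, A, H):
    """Estimate the value of `policy` from offline episodes.

    episodes : list of trajectories, each a list of (s, a, r, s_next) tuples
               of length H (assumed data format).
    policy   : array of shape (S, A), pi(a | s) for the target policy.
    S, A, H  : number of states, number of actions, and the horizon.
    """
    # Empirical transition model and mean rewards (the "plug-in" MDP).
    counts = np.zeros((S, A, S))
    reward_sum = np.zeros((S, A))
    visit = np.zeros((S, A))
    init_counts = np.zeros(S)

    for traj in episodes:
        init_counts[traj[0][0]] += 1
        for (s, a, r, s_next) in traj:
            counts[s, a, s_next] += 1
            reward_sum[s, a] += r
            visit[s, a] += 1

    # Unvisited pairs fall back to a zero-reward self-loop; this is one simple
    # convention and only matters outside the data support.
    P_hat = np.zeros((S, A, S))
    for s in range(S):
        for a in range(A):
            if visit[s, a] > 0:
                P_hat[s, a] = counts[s, a] / visit[s, a]
            else:
                P_hat[s, a, s] = 1.0
    r_hat = reward_sum / np.maximum(visit, 1)
    mu_hat = init_counts / max(len(episodes), 1)

    # Backward induction (policy evaluation) in the empirical MDP.
    V = np.zeros(S)
    for _ in range(H):
        # Q(s, a) = r_hat(s, a) + sum_{s'} P_hat(s, a, s') V(s')
        Q = r_hat + P_hat @ V
        V = np.einsum("sa,sa->s", policy, Q)

    return float(mu_hat @ V)
```

The abstract's guarantee concerns how fast such a plug-in estimate converges to the true policy value as the number of episodes $K$ grows, with error scaling as $\tilde{O}(\sqrt{1/(Kd_m)})$ and, notably, no polynomial dependence on the horizon $H$ when the total reward is bounded by $1$.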
