Logarithmic Regret for Reinforcement Learning with Linear Function Approximation

23 November 2020
Jiafan He
Dongruo Zhou
Quanquan Gu
arXiv:2011.11566
Abstract

Reinforcement learning (RL) with linear function approximation has received increasing attention recently. However, existing work has focused on obtaining $\sqrt{T}$-type regret bounds, where $T$ is the number of interactions with the MDP. In this paper, we show that logarithmic regret is attainable under two recently proposed linear MDP assumptions, provided that there exists a positive sub-optimality gap for the optimal action-value function. More specifically, under the linear MDP assumption (Jin et al. 2019), the LSVI-UCB algorithm can achieve $\tilde{O}(d^{3}H^{5}/\mathrm{gap}_{\min}\cdot \log(T))$ regret; and under the linear mixture MDP assumption (Ayoub et al. 2020), the UCRL-VTR algorithm can achieve $\tilde{O}(d^{2}H^{5}/\mathrm{gap}_{\min}\cdot \log^{3}(T))$ regret, where $d$ is the dimension of the feature mapping, $H$ is the length of the episode, $\mathrm{gap}_{\min}$ is the minimal sub-optimality gap, and $\tilde{O}$ hides all logarithmic terms except $\log(T)$. To the best of our knowledge, these are the first logarithmic regret bounds for RL with linear function approximation. We also establish gap-dependent lower bounds for the two linear MDP models.
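
The abstract does not define the minimal sub-optimality gap $\mathrm{gap}_{\min}$; a common formalization for episodic MDPs, stated here as an assumption in line with the standard gap-dependent regret literature rather than as the paper's own definition, is:

% Assumed (standard) definition: the gap at step h is the loss of action a
% relative to the optimal value, and gap_min is the smallest positive gap
% over all steps h in [H], states s, and actions a.
\[
  \mathrm{gap}_h(s, a) \;=\; V^{*}_{h}(s) - Q^{*}_{h}(s, a),
  \qquad
  \mathrm{gap}_{\min} \;=\;
  \min_{h \in [H],\, s,\, a}
  \bigl\{ \mathrm{gap}_h(s, a) \;:\; \mathrm{gap}_h(s, a) > 0 \bigr\}.
\]

Under this reading, the condition in the abstract requires every sub-optimal action to be separated from the optimal value by at least $\mathrm{gap}_{\min} > 0$, which is what lets the $\sqrt{T}$-type bounds be sharpened to $\log(T)$-type bounds.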
