First-Order Regret in Reinforcement Learning with Linear Function Approximation: A Robust Estimation Approach

7 December 2021
Andrew Wagenmaker
Yifang Chen
Max Simchowitz
S. Du
Kevin G. Jamieson
Abstract

Obtaining first-order regret bounds -- regret bounds scaling not as the worst-case but with some measure of the performance of the optimal policy on a given instance -- is a core question in sequential decision-making. While such bounds exist in many settings, they have proven elusive in reinforcement learning with large state spaces. In this work we address this gap, and show that it is possible to obtain regret scaling as $\widetilde{\mathcal{O}}\big(\sqrt{d^3 H^3 \cdot V_1^\star \cdot K} + d^{3.5} H^3 \log K\big)$ in reinforcement learning with large state spaces, namely the linear MDP setting. Here $V_1^\star$ is the value of the optimal policy and $K$ is the number of episodes. We demonstrate that existing techniques based on least squares estimation are insufficient to obtain this result, and instead develop a novel robust self-normalized concentration bound based on the robust Catoni mean estimator, which may be of independent interest.
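The key technical tool named in the abstract is a self-normalized concentration bound built on Catoni's robust mean estimator. As a rough illustration of that underlying estimator (the classical Catoni construction, not the paper's self-normalized variant), the sketch below computes a robust mean from i.i.d. samples; the influence function, the choice of `alpha`, and the confidence parameter `delta` follow the standard Catoni-style tuning and are assumptions here, not the paper's exact recipe.

```python
import numpy as np
from scipy.optimize import brentq

def catoni_psi(x):
    # Catoni's bounded influence function:
    #   psi(x) =  log(1 + x + x^2/2)   for x >= 0
    #   psi(x) = -log(1 - x + x^2/2)   for x <  0
    return np.where(x >= 0,
                    np.log1p(x + 0.5 * x**2),
                    -np.log1p(-x + 0.5 * x**2))

def catoni_mean(samples, variance_bound, delta=0.05):
    """Catoni robust mean estimate of i.i.d. samples with variance <= variance_bound.

    The estimate is the root (in theta) of sum_i psi(alpha * (x_i - theta)) = 0,
    with alpha set from the variance bound and confidence level delta
    (classical Catoni-style tuning; the paper's self-normalized variant differs).
    """
    x = np.asarray(samples, dtype=float)
    n = len(x)
    log_term = np.log(2.0 / delta)
    alpha = np.sqrt(2.0 * log_term / (n * variance_bound * (1.0 + 2.0 * log_term / n)))
    f = lambda theta: np.sum(catoni_psi(alpha * (x - theta)))
    # f is decreasing in theta and changes sign on [min(x) - 1, max(x) + 1],
    # so a bracketing root-finder recovers the unique Catoni estimate.
    return brentq(f, x.min() - 1.0, x.max() + 1.0)

# Example: heavy-tailed samples, where the empirical mean is easily perturbed by outliers.
rng = np.random.default_rng(0)
samples = rng.standard_t(df=2.5, size=200)  # heavy tails, finite variance
print("empirical mean:", samples.mean())
print("Catoni estimate:", catoni_mean(samples, variance_bound=5.0))
```

The intuition carried over to the paper's setting is that a Catoni-type estimator controls deviations in terms of the (possibly small) variance rather than a worst-case range, which is what allows regret to scale with $V_1^\star$ instead of the horizon-worst-case value.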
