  3. 1911.05873

A Reduction from Reinforcement Learning to No-Regret Online Learning

14 November 2019
Ching-An Cheng
Rémi Tachet des Combes
Byron Boots
Geoffrey J. Gordon
Abstract

We present a reduction from reinforcement learning (RL) to no-regret online learning based on the saddle-point formulation of RL, by which "any" online algorithm with sublinear regret can generate policies with provable performance guarantees. This new perspective decouples the RL problem into two parts: regret minimization and function approximation. The first part admits a standard online-learning analysis, and the second part can be quantified independently of the learning algorithm. Therefore, the proposed reduction can be used as a tool to systematically design new RL algorithms. We demonstrate this idea by devising a simple RL algorithm based on mirror descent and the generative-model oracle. For any $\gamma$-discounted tabular RL problem, with probability at least $1-\delta$, it learns an $\epsilon$-optimal policy using at most $\tilde{O}\left(\frac{|\mathcal{S}||\mathcal{A}|\log(\frac{1}{\delta})}{(1-\gamma)^4\epsilon^2}\right)$ samples. Furthermore, this algorithm admits a direct extension to linearly parameterized function approximators for large-scale applications, with computation and sample complexities independent of $|\mathcal{S}|$, $|\mathcal{A}|$, though at the cost of potential approximation bias.
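
To make the saddle-point perspective concrete, the sketch below solves a small tabular $\gamma$-discounted MDP by running mirror descent on the standard linear-programming Lagrangian $L(\lambda, v) = (1-\gamma)\langle \mu_0, v\rangle + \langle \lambda,\, r + \gamma P v - v\rangle$, where $\lambda$ is a normalized state-action occupancy measure and $v$ a value function. This is a minimal illustration of the general idea rather than the paper's algorithm: the MDP instance, step sizes, and iteration count are arbitrary choices, and exact gradients are used in place of the generative-model samples the abstract refers to.

```python
import numpy as np

# Minimal, illustrative sketch of a mirror-descent saddle-point solver for a
# tabular gamma-discounted MDP, in the spirit of the reduction described in the
# abstract. The MDP below is a small random instance; P, r, mu0, the step
# sizes, and the iteration count are illustrative, not the paper's. Exact
# gradients of the Lagrangian are used for clarity; the paper's algorithm
# estimates them from a generative-model oracle instead.

rng = np.random.default_rng(0)
S, A, gamma = 5, 3, 0.9
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a, s']: transition kernel
r = rng.uniform(size=(S, A))                 # rewards in [0, 1]
mu0 = np.ones(S) / S                         # initial state distribution

# Saddle-point players: occupancy measure lam (max player, on the simplex)
# and value function v (min player, bounded in [0, 1/(1-gamma)]).
lam = np.ones((S, A)) / (S * A)
v = np.zeros(S)
lam_avg = np.zeros_like(lam)
eta_lam, eta_v, T = 0.1, 0.5, 5000

for t in range(T):
    # Gradient w.r.t. lam: the Bellman residual r(s,a) + gamma*E[v(s')] - v(s).
    bellman = r + gamma * P @ v - v[:, None]

    # Gradient w.r.t. v: (1-gamma)*mu0 + gamma*P^T lam - marginal of lam.
    grad_v = (1 - gamma) * mu0 + gamma * np.einsum("sa,sap->p", lam, P) - lam.sum(axis=1)

    # lam-player: entropic mirror ascent (exponentiated gradient) on the simplex.
    lam = lam * np.exp(eta_lam * bellman)
    lam /= lam.sum()

    # v-player: projected gradient descent onto [0, 1/(1-gamma)].
    v = np.clip(v - eta_v * grad_v, 0.0, 1.0 / (1 - gamma))

    lam_avg += lam / T

# Read off a policy from the averaged occupancy measure: pi(a|s) ∝ lam_avg(s, a).
pi = lam_avg / lam_avg.sum(axis=1, keepdims=True)
print(pi.round(3))
```

Entropic mirror descent is a natural choice for the $\lambda$-player here because the exponentiated-gradient update keeps the iterate on the simplex and its regret grows only logarithmically with the number of state-action pairs; averaging the iterates is the standard way to turn the two players' sublinear regret into an approximate saddle point, and hence an approximately optimal policy.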
