Reinforcement Learning with General Utilities: Simpler Variance Reduction and Large State-Action Space

2 June 2023
Anas Barakat, Ilyas Fatkhullin, Niao He
Abstract

We consider the reinforcement learning (RL) problem with general utilities, which consists in maximizing a function of the state-action occupancy measure. Beyond the standard cumulative-reward RL setting, this problem includes as particular cases constrained RL, pure exploration, and learning from demonstrations, among others. For this problem, we propose a simpler single-loop parameter-free normalized policy gradient algorithm. Implementing a recursive momentum variance reduction mechanism, our algorithm achieves $\tilde{\mathcal{O}}(\epsilon^{-3})$ and $\tilde{\mathcal{O}}(\epsilon^{-2})$ sample complexities for $\epsilon$-first-order stationarity and $\epsilon$-global optimality, respectively, under adequate assumptions. We further address the setting of large finite state-action spaces via linear function approximation of the occupancy measure and show an $\tilde{\mathcal{O}}(\epsilon^{-4})$ sample complexity for a simple policy gradient method with a linear regression subroutine.
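For intuition about the recursive momentum variance reduction mechanism the abstract mentions, here is a minimal, self-contained sketch of a normalized gradient-ascent loop with a STORM-style recursive momentum estimator. The toy quadratic objective, the noise model, and all hyperparameters are illustrative assumptions, not the paper's actual algorithm or problem setting.

```python
import numpy as np

rng = np.random.default_rng(0)

def storm_normalized_ascent(theta0, num_iters=2000, step_size=0.05, a=0.1):
    """Normalized gradient ascent with STORM-style recursive momentum.

    Toy objective (an illustrative assumption, not the paper's setting):
        f(theta) = -||theta - 1||^2 / 2,  so  grad f(theta) = 1 - theta,
    observed through additive Gaussian noise shared by both gradient
    evaluations of each iteration, as recursive momentum requires.
    """
    theta = np.asarray(theta0, dtype=float)
    d = (1.0 - theta) + 0.5 * rng.standard_normal(theta.shape)  # initial estimate
    for _ in range(num_iters):
        # Parameter-free normalized step: only the direction of d is used.
        theta_next = theta + step_size * d / (np.linalg.norm(d) + 1e-12)
        noise = 0.5 * rng.standard_normal(theta.shape)  # one shared sample
        g_next = (1.0 - theta_next) + noise  # stochastic grad at new iterate
        g_prev = (1.0 - theta) + noise       # stochastic grad at old iterate
        # Recursive momentum (STORM): d <- g_next + (1 - a) * (d - g_prev)
        d = g_next + (1.0 - a) * (d - g_prev)
        theta = theta_next
    return theta

print(storm_normalized_ascent(np.zeros(3)))  # ends near [1., 1., 1.]
```

Two design points carry the sketch: both gradient evaluations in an iteration share the same noise sample, which is what lets the momentum correction cancel variance, and the update uses only the direction of the estimate, which is why the step size needs no tuning against the gradient scale.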
