
arXiv:2007.11684
Approximation Benefits of Policy Gradient Methods with Aggregated States

22 July 2020
Daniel Russo
Abstract

Folklore suggests that policy gradient can be more robust to misspecification than its relative, approximate policy iteration. This paper studies the case of state-aggregated representations, where the state space is partitioned and either the policy or value function approximation is held constant over partitions. This paper shows a policy gradient method converges to a policy whose regret per-period is bounded by ϵ, the largest difference between two elements of the state-action value function belonging to a common partition. With the same representation, both approximate policy iteration and approximate value iteration can produce policies whose per-period regret scales as ϵ/(1−γ), where γ is a discount factor. Faced with inherent approximation error, methods that locally optimize the true decision-objective can be far more robust.
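The setting described above can be sketched concretely. The following is a minimal illustrative example, not the paper's algorithm or experiments: a random 4-state, 2-action tabular MDP (all numbers and the two-block partition are assumptions made here), with a softmax policy held constant over each partition and updated by exact policy gradient ascent. It shows how aggregation constrains the policy while the update still optimizes the true discounted objective.

```python
import numpy as np

rng = np.random.default_rng(0)
nS, nA, gamma = 4, 2, 0.9

# Random tabular MDP (illustrative only, not from the paper):
P = rng.dirichlet(np.ones(nS), size=(nS, nA))  # P[s, a] = next-state distribution
R = rng.uniform(size=(nS, nA))                 # R[s, a] = expected reward
mu = np.full(nS, 1.0 / nS)                     # start-state distribution

# State aggregation: the policy is held constant over each partition.
partition = np.array([0, 0, 1, 1])             # states {0,1} and {2,3} aggregated
nP = partition.max() + 1

def policy(theta):
    """Softmax policy with one logit vector per partition, shared by its states."""
    logits = theta[partition]                               # (nS, nA)
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def evaluate(pi):
    """Exact policy evaluation: V^pi, Q^pi, and discounted occupancy d^pi."""
    P_pi = np.einsum('sa,san->sn', pi, P)
    r_pi = np.einsum('sa,sa->s', pi, R)
    V = np.linalg.solve(np.eye(nS) - gamma * P_pi, r_pi)
    Q = R + gamma * np.einsum('san,n->sa', P, V)
    d = (1 - gamma) * np.linalg.solve(np.eye(nS) - gamma * P_pi.T, mu)
    return V, Q, d

theta = np.zeros((nP, nA))
J0 = mu @ evaluate(policy(theta))[0]           # objective before any updates

for _ in range(500):
    pi = policy(theta)
    V, Q, d = evaluate(pi)
    # Policy-gradient theorem for softmax logits; per-state contribution:
    g = d[:, None] * pi * (Q - V[:, None]) / (1 - gamma)
    grad = np.zeros_like(theta)
    np.add.at(grad, partition, g)              # sum over states sharing logits
    theta += 0.5 * grad                        # plain gradient ascent step

pi = policy(theta)
J = mu @ evaluate(pi)[0]                       # objective after training
```

Because the logits are indexed by partition rather than by state, states in a common partition necessarily act identically; the gradient step nevertheless ascends the true objective J(θ) = μ·V^π, which is the locality property the abstract credits for robustness.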
