Policy Gradient in Partially Observable Environments: Approximation and Convergence

18 October 2018
Kamyar Azizzadenesheli
Manish Kumar Bera
Anima Anandkumar
Abstract

Policy gradient is a generic and flexible reinforcement learning approach that is generally simple to analyze, implement, and deploy. Over the last few decades, this approach has been extensively advanced for fully observable environments. In this paper, we generalize a variety of these advances to partially observable settings and, as in the fully observable case, keep our focus on the class of Markovian policies. We propose a series of technical tools, including a novel notion of advantage function, to develop policy gradient algorithms and study their convergence properties in such environments. Deploying these tools, we generalize a variety of existing theoretical guarantees, such as the policy gradient and convergence theorems, to partially observable domains; these guarantees could also be carried over to further settings of interest. This study also sheds light on policy gradient approaches in real-world applications, which tend to be partially observable.
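
To make the Markovian-policy setting concrete, the following is a minimal sketch of a plain REINFORCE-style policy gradient in which the policy conditions only on the current observation rather than the hidden state. It is not the paper's algorithm and does not use its advantage-function construction; the toy POMDP, horizon, and learning rate below are assumptions made purely for illustration.

```python
# Minimal sketch (not the paper's method): REINFORCE with a Markovian policy
# over observations, run on a small hypothetical two-state POMDP.
import numpy as np

rng = np.random.default_rng(0)

# --- hypothetical toy POMDP: 2 hidden states, 2 observations, 2 actions ---
N_STATES, N_OBS, N_ACTIONS, HORIZON = 2, 2, 2, 10
# P[s, a] -> distribution over next hidden states
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.3, 0.7], [0.8, 0.2]]])
# O[s] -> distribution over observations (noisy readout of the hidden state)
O = np.array([[0.8, 0.2],
              [0.2, 0.8]])
# R[s, a] -> expected reward
R = np.array([[1.0, 0.0],
              [0.0, 1.0]])

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

# Tabular softmax policy: parameters indexed by (observation, action).
theta = np.zeros((N_OBS, N_ACTIONS))

def rollout(theta):
    """Sample one trajectory; the agent only ever sees observations."""
    s = rng.integers(N_STATES)
    obs, acts, rews = [], [], []
    for _ in range(HORIZON):
        o = rng.choice(N_OBS, p=O[s])
        a = rng.choice(N_ACTIONS, p=softmax(theta[o]))
        obs.append(o); acts.append(a); rews.append(R[s, a])
        s = rng.choice(N_STATES, p=P[s, a])
    return obs, acts, rews

def reinforce_step(theta, lr=0.1, n_traj=32):
    """One update: grad log pi(a|o) weighted by the undiscounted return-to-go."""
    grad = np.zeros_like(theta)
    for _ in range(n_traj):
        obs, acts, rews = rollout(theta)
        returns = np.cumsum(rews[::-1])[::-1]  # G_t = sum of rewards from t on
        for o, a, G in zip(obs, acts, returns):
            pi = softmax(theta[o])
            g_logpi = -pi
            g_logpi[a] += 1.0                  # d/dtheta log pi(a|o) for softmax
            grad[o] += G * g_logpi
    return theta + lr * grad / n_traj

for _ in range(200):
    theta = reinforce_step(theta)
print("learned pi(a|o):", np.round(np.apply_along_axis(softmax, 1, theta), 2))
```

Because the policy is a function of the observation alone, this sketch stays within the Markovian policy class the abstract refers to, even though the environment's dynamics depend on the unobserved hidden state.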
