A nearly Blackwell-optimal policy gradient method

28 May 2021
Vektor Dewanto, M. Gallagher
OffRL
arXiv:2105.13609
Abstract

For continuing environments, reinforcement learning (RL) methods commonly maximize the discounted reward criterion with a discount factor close to 1 in order to approximate the average reward (the gain). However, such a criterion captures only the long-run steady-state performance and ignores the behaviour in transient states. In this work, we develop a policy gradient method that first optimizes the gain and then the bias (which measures transient performance and is essential for selecting among policies with equal gain). We derive expressions that enable sampling-based estimation of the gradient of the bias and of its preconditioning Fisher matrix. We further devise an algorithm that solves the gain-then-bias (bi-level) optimization. Its key ingredient is an RL-specific logarithmic barrier function. Experimental results provide insights into the fundamental mechanisms of our proposal.
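The gain/bias distinction the abstract relies on can be made concrete on a small worked example. The NumPy sketch below (illustrative only, not the paper's algorithm) computes the exact gain and bias of a tiny unichain Markov reward process via the deviation matrix D = (I − P + P*)^{-1} − P*, and exhibits two reward functions with equal gain but different bias. The chain, the rewards, and the helper gain_and_bias are assumptions made for illustration.

```python
import numpy as np

# A tiny unichain Markov reward process: state 0 is transient,
# states 1 and 2 form the (aperiodic) recurrent class.
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.5, 0.5],
              [0.0, 0.5, 0.5]])

def gain_and_bias(P, r):
    """Exact gain and bias of a unichain MRP (illustrative helper).

    gain g = pi_stat @ r, and bias b = D @ r, where the deviation
    matrix is D = (I - P + P_star)^{-1} - P_star and every row of
    P_star equals the stationary distribution pi_stat.
    """
    n = P.shape[0]
    # Stationary distribution: solve pi P = pi with sum(pi) = 1.
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    rhs = np.zeros(n + 1)
    rhs[-1] = 1.0
    pi_stat, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    P_star = np.tile(pi_stat, (n, 1))
    g = pi_stat @ r
    D = np.linalg.inv(np.eye(n) - P + P_star) - P_star
    return g, D @ r

# Two reward functions that differ only in the transient state 0.
r_a = np.array([10.0, 1.0, 0.0])
r_b = np.array([0.0, 1.0, 0.0])
for name, r in [("A", r_a), ("B", r_b)]:
    g, bias = gain_and_bias(P, r)
    print(f"{name}: gain = {g:.2f}, bias = {np.round(bias, 2)}")
# Both print gain = 0.50, but the bias at the transient state is
# 10.0 for A and 0.0 for B: the gain alone cannot tell them apart.
```

Read in this light, the paper's gain-then-bias optimization amounts to: among policies that (nearly) maximize the gain, prefer the one with the larger bias. The logarithmic barrier mentioned in the abstract plausibly serves to keep iterates inside the near-gain-optimal set while the bias is improved, e.g. via a scalarized objective of the form bias(θ) + η·log(gain(θ) − g* + ε); this reading, and the symbols η, g*, and ε, are assumptions, not the paper's stated formulation.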
