A Deeper Look at Discounting Mismatch in Actor-Critic Algorithms

2 October 2020
Shangtong Zhang
Romain Laroche
H. V. Seijen
Shimon Whiteson
Rémi Tachet des Combes
Abstract

We investigate the discounting mismatch in actor-critic algorithm implementations from a representation learning perspective. Theoretically, actor-critic algorithms usually have discounting for both actor and critic, i.e., there is a $\gamma^t$ term in the actor update for the transition observed at time $t$ in a trajectory, and the critic is a discounted value function. Practitioners, however, usually ignore the discounting ($\gamma^t$) for the actor while using a discounted critic. We investigate this mismatch in two scenarios. In the first scenario, we consider optimizing an undiscounted objective ($\gamma = 1$), where $\gamma^t$ disappears naturally ($1^t = 1$). We then propose to interpret the discounting in the critic in terms of a bias-variance-representation trade-off and provide supporting empirical results. In the second scenario, we consider optimizing a discounted objective ($\gamma < 1$) and propose to interpret the omission of the discounting in the actor update from an auxiliary task perspective, again providing supporting empirical results.
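
To make the mismatch concrete, here is a minimal PyTorch sketch (illustrative only, not the authors' code) contrasting the theoretically prescribed actor update with the common implementation; the tensors `log_probs` and `advantages` are hypothetical stand-ins for quantities an actual actor-critic agent would produce.

```python
# Minimal sketch of the discounting mismatch in actor-critic updates.
# All names here are illustrative placeholders, not the paper's code.
import torch

gamma = 0.99
T = 100  # trajectory length
log_probs = torch.randn(T, requires_grad=True)  # stand-in for log pi(a_t | s_t)
advantages = torch.randn(T)                     # stand-in for discounted critic advantages

# Theoretical actor update: the transition at time t is weighted by gamma^t.
t = torch.arange(T, dtype=torch.float32)
loss_theory = -(gamma ** t * advantages * log_probs).sum()

# Common practice: the gamma^t weight is dropped for the actor (the mismatch
# the paper studies), even though the critic targets remain discounted.
loss_practice = -(advantages * log_probs).sum()

print(loss_theory.item(), loss_practice.item())
```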
