ResearchTrend.AI

Towards Practical Credit Assignment for Deep Reinforcement Learning

8 June 2021
Vyacheslav Alipov
Riley Simmons-Edler
N.Yu. Putintsev
Pavel Kalinin
Dmitry Vetrov
    OffRL
Abstract

Credit assignment is a fundamental problem in reinforcement learning: measuring an action's influence on future rewards. Explicit credit assignment methods have the potential to boost the performance of RL algorithms on many tasks, but thus far remain impractical for general use. Recently, a family of methods called Hindsight Credit Assignment (HCA) was proposed, which explicitly assigns credit to actions in hindsight based on the probability of each action having led to an observed outcome. This approach has appealing properties, but remains a largely theoretical idea applicable to a limited set of tabular RL tasks, and it is unclear how to extend HCA to deep RL environments. In this work, we explore the use of HCA-style credit in a deep RL context. We first describe the limitations of existing HCA algorithms in deep RL that lead to their poor performance or complete failure to train, then propose several theoretically justified modifications to overcome them. We examine the quantitative and qualitative effects of the resulting algorithm on the Arcade Learning Environment (ALE) benchmark, and observe that it improves performance over Advantage Actor-Critic (A2C) on many games where non-trivial credit assignment is necessary to achieve high scores and where hindsight probabilities can be accurately estimated.
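The hindsight mechanism the abstract alludes to can be made concrete. In return-conditioned HCA, an action's advantage is estimated from the ratio between the policy probability pi(a|s) and a learned hindsight distribution h(a|s, Z) conditioned on the observed return Z: if the action is no more likely given the outcome than under the policy, it receives no credit. The snippet below is a minimal illustrative sketch of that estimator, not the paper's actual code; the function name and scalar interface are assumptions for exposition.

```python
def hca_return_advantage(pi_a: float, h_a: float, ret: float) -> float:
    """Return-conditioned HCA advantage estimate.

    A(s, a) = (1 - pi(a|s) / h(a|s, Z)) * Z

    pi_a : probability of action a under the policy, pi(a|s)
    h_a  : probability of a under a learned hindsight model
           conditioned on the observed return Z, h(a|s, Z)
    ret  : the observed return Z
    """
    return (1.0 - pi_a / h_a) * ret


# If the outcome tells us nothing about the action (h == pi),
# the action gets zero credit for the return.
print(hca_return_advantage(0.5, 0.5, 3.0))   # 0.0
# An action made more likely in hindsight earns positive credit.
print(hca_return_advantage(0.25, 0.5, 2.0))  # 1.0
```

Note that the estimate degrades as h(a|s, Z) becomes inaccurate, which is consistent with the abstract's observation that gains appear on games where hindsight probabilities can be estimated well.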
