Counterfactual Credit Assignment in Model-Free Reinforcement Learning

18 November 2020 · arXiv:2011.09464
Thomas Mesnard, Théophane Weber, Fabio Viola, Shantanu Thakoor, Alaa Saade, Anna Harutyunyan, Will Dabney, Tom Stepleton, Nicolas Heess, Arthur Guez, Éric Moulines, Marcus Hutter, Lars Buesing, Rémi Munos
Topics: CML, OffRL
Abstract

Credit assignment in reinforcement learning is the problem of measuring an action's influence on future rewards. In particular, this requires separating skill from luck, i.e. disentangling the effect of an action on rewards from that of external factors and subsequent actions. To achieve this, we adapt the notion of counterfactuals from causality theory to a model-free RL setup. The key idea is to condition value functions on future events, by learning to extract relevant information from a trajectory. We formulate a family of policy gradient algorithms that use these future-conditional value functions as baselines or critics, and show that they are provably low variance. To avoid the potential bias from conditioning on future information, we constrain the hindsight information to not contain information about the agent's actions. We demonstrate the efficacy and validity of our algorithm on a number of illustrative and challenging problems.
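
To make the key idea concrete, below is a minimal sketch, assuming a PyTorch setup, of a policy gradient that subtracts a future-conditional ("hindsight") baseline and penalizes the hindsight summary for carrying information about the agent's action, in the spirit of the abstract. The GRU encoder, the action probe, the toy dimensions, and the combined loss are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a future-conditional ("hindsight") baseline for
# policy gradients. Module shapes, names, and losses are assumptions for
# illustration; this is not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

OBS_DIM, N_ACTIONS, HINDSIGHT_DIM = 8, 4, 16

policy = nn.Linear(OBS_DIM, N_ACTIONS)                        # pi(a | s_t)
hindsight_enc = nn.GRU(OBS_DIM + 1, HINDSIGHT_DIM)            # Phi_t from the future
baseline = nn.Linear(OBS_DIM + HINDSIGHT_DIM, 1)              # V(s_t, Phi_t)
action_probe = nn.Linear(OBS_DIM + HINDSIGHT_DIM, N_ACTIONS)  # p(a_t | s_t, Phi_t)


def hindsight_baseline_losses(obs, actions, rewards):
    """obs: [T, OBS_DIM] float, actions: [T] long, rewards: [T] float."""
    T = obs.shape[0]
    # Undiscounted returns G_t = sum over k >= t of r_k.
    returns = torch.flip(torch.cumsum(torch.flip(rewards, [0]), 0), [0])

    # Phi_t summarises the *future* of the trajectory: run a GRU backwards
    # over (obs, reward) so the hidden state at step t only sees steps >= t.
    fut = torch.cat([obs, rewards.unsqueeze(-1)], dim=-1)
    rev_out, _ = hindsight_enc(torch.flip(fut, [0]).unsqueeze(1))
    phi = torch.flip(rev_out.squeeze(1), [0])                 # [T, HINDSIGHT_DIM]

    s_phi = torch.cat([obs, phi], dim=-1)
    v = baseline(s_phi).squeeze(-1)                           # future-conditional baseline

    logits = policy(obs)
    logp_taken = F.log_softmax(logits, dim=-1)[torch.arange(T), actions]

    # Policy gradient with the hindsight baseline subtracted out; no gradient
    # flows from the policy loss into the baseline or the encoder.
    advantage = (returns - v).detach()
    pg_loss = -(logp_taken * advantage).mean()
    v_loss = F.mse_loss(v, returns)

    # Independence penalty: Phi_t should carry no extra information about a_t.
    # A probe predicts a_t from (s_t, Phi_t); the encoder is then pushed so the
    # probe's prediction matches what the policy alone already says.
    probe_logp = F.log_softmax(action_probe(s_phi.detach()), dim=-1)
    probe_loss = F.nll_loss(probe_logp, actions)              # trains the probe
    probe_on_enc = F.log_softmax(action_probe(s_phi), dim=-1)
    indep_loss = F.kl_div(probe_on_enc,
                          F.softmax(logits.detach(), dim=-1),
                          reduction="batchmean")              # shapes the encoder

    # In practice probe_loss and indep_loss would use separate optimisers so
    # each term updates only its intended parameters; folded together here.
    return pg_loss + v_loss + probe_loss + indep_loss
```

A training loop would call this on each collected trajectory and backpropagate. The point mirrored from the abstract: conditioning the baseline on a future summary Phi_t can lower the variance of the gradient estimate, while the independence penalty guards against the bias that raw future information about the chosen action would otherwise introduce.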

View on arXiv