Equivalence Between Policy Gradients and Soft Q-Learning

21 April 2017
John Schulman
Xi Chen
Pieter Abbeel
arXiv:1704.06440
Abstract

Two of the leading approaches for model-free reinforcement learning are policy gradient methods and Q-learning methods. Q-learning methods can be effective and sample-efficient when they work; however, it is not well understood why they work, since empirically the Q-values they estimate are very inaccurate. A partial explanation may be that Q-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between Q-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, namely, that "soft" (entropy-regularized) Q-learning is exactly equivalent to a policy gradient method. We also point out a connection between Q-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of Q-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a Q-learning method that closely matches the learning dynamics of A3C without using a target network or an ε-greedy exploration schedule.
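As a sketch of the entropy-regularized setting the abstract refers to (standard notation with a temperature τ; these symbols are assumptions, not taken from this page): when an entropy bonus with weight τ is added to the reward, the optimal policy is a softmax of the soft Q-function,

    \pi(a \mid s) = \exp\big( (Q(s,a) - V(s)) / \tau \big),
    V(s) = \tau \log \sum_{a'} \exp\big( Q(s,a') / \tau \big),

so any Q-function induces a softmax policy, and the stated equivalence is between updating Q by soft Q-learning and updating this induced policy by an entropy-regularized policy gradient.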
