
arXiv:2111.02787

Balanced Q-learning: Combining the Influence of Optimistic and Pessimistic Targets

3 November 2021
Thommen George Karimpanal, Hung Le, Majid Abdolshah, Santu Rana, Sunil R. Gupta, T. Tran, Svetha Venkatesh
Abstract

The optimistic nature of the Q-learning target leads to an overestimation bias, an inherent problem in standard Q-learning. Such a bias fails to account for the possibility of low returns, particularly in risky scenarios. However, biases, whether of overestimation or underestimation, need not be undesirable. In this paper, we analytically examine the utility of biased learning and show that specific types of biases may be preferable, depending on the scenario. Based on this finding, we design a novel reinforcement learning algorithm, Balanced Q-learning, in which the target is modified to be a convex combination of a pessimistic and an optimistic term, with the associated weights determined online, analytically. We prove the convergence of this algorithm in a tabular setting, and empirically demonstrate its superior learning performance in various environments.
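To make the balanced target concrete, the sketch below applies it in a single tabular update. This is a minimal illustration under stated assumptions, not the authors' implementation: the fixed mixing weight `beta` is a placeholder, whereas the paper determines the weights online, analytically.

```python
import numpy as np

def balanced_q_update(Q, s, a, r, s_next, done,
                      alpha=0.1, gamma=0.99, beta=0.5):
    """One tabular update with a balanced target.

    Target: r + gamma * [beta * max_a' Q(s',a') + (1 - beta) * min_a' Q(s',a')],
    i.e., a convex combination of an optimistic (max) and a
    pessimistic (min) bootstrap term. Here beta is a fixed,
    illustrative weight; the paper computes it online.
    """
    optimistic = np.max(Q[s_next])    # standard Q-learning (optimistic) term
    pessimistic = np.min(Q[s_next])   # worst-case (pessimistic) term
    bootstrap = beta * optimistic + (1.0 - beta) * pessimistic
    target = r if done else r + gamma * bootstrap
    Q[s, a] += alpha * (target - Q[s, a])

# Hypothetical usage on a toy 5-state, 2-action problem:
Q = np.zeros((5, 2))
balanced_q_update(Q, s=0, a=1, r=1.0, s_next=3, done=False)
```

Setting beta = 1 recovers the standard (optimistic) Q-learning target, while beta = 0 gives a fully pessimistic target; intermediate values trade off the two biases.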
