On the Reduction of Variance and Overestimation of Deep Q-Learning

14 October 2019
Mohammed Sabry, A. A. A. Khalifa
Topics: OOD, OffRL
arXiv:1910.05983
Abstract

The breakthrough of deep Q-Learning across different types of environments revolutionized the algorithmic design of reinforcement learning, introducing more stable and robust algorithms. To that end, many extensions to the deep Q-Learning algorithm have been proposed to reduce the variance of the target values and the overestimation phenomenon. In this paper, we examine a new methodology to address these issues: we propose applying Dropout techniques to the deep Q-Learning algorithm as a way to reduce variance and overestimation. We also present experiments conducted on benchmark environments, demonstrating the effectiveness of our methodology in enhancing stability and reducing both variance and overestimation in model performance.
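The abstract does not spell out an architecture or update rule, so the sketch below is only a minimal illustration of the general idea, not the paper's implementation: a PyTorch Q-network with Dropout layers, where the bootstrap target averages Q-values over several stochastic dropout masks before taking the max. The class and function names, layer sizes, dropout rate, and number of passes are all illustrative assumptions.

```python
# Illustrative sketch only: dropout-regularized Q-network with a
# mask-averaged bootstrap target. Names and hyperparameters are assumed,
# not taken from the paper.
import torch
import torch.nn as nn

class DropoutQNetwork(nn.Module):
    def __init__(self, state_dim: int, n_actions: int, p_drop: float = 0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128),
            nn.ReLU(),
            nn.Dropout(p_drop),   # dropout noise is averaged out in dropout_target
            nn.Linear(128, 128),
            nn.ReLU(),
            nn.Dropout(p_drop),
            nn.Linear(128, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

@torch.no_grad()
def dropout_target(q_net: DropoutQNetwork,
                   next_states: torch.Tensor,   # [batch, state_dim]
                   rewards: torch.Tensor,       # [batch]
                   dones: torch.Tensor,         # [batch], 1.0 if terminal
                   gamma: float = 0.99,
                   n_passes: int = 10) -> torch.Tensor:
    """Average Q-values over several dropout masks before the max over actions."""
    q_net.train()  # keep dropout active so each pass samples a fresh mask
    q_mean = torch.stack([q_net(next_states) for _ in range(n_passes)]).mean(dim=0)
    q_net.eval()
    return rewards + gamma * (1.0 - dones) * q_mean.max(dim=1).values
```

The design intuition behind averaging before the max: taking a max over a single noisy Q-estimate is biased upward, so reducing the estimate's variance by averaging across dropout masks also damps that overestimation.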
