Q-learning is one of the most fundamental reinforcement learning (RL) algorithms. Despite its widespread success in various applications, it is prone to overestimation bias in the Q-learning update. To address this issue, double Q-learning employs two independent Q-estimators, which are randomly selected and updated during the learning process. This paper proposes a modified double Q-learning algorithm, called simultaneous double Q-learning (SDQ), together with its finite-time analysis. SDQ eliminates the need for random selection between the two Q-estimators, and this modification allows us to analyze double Q-learning through the lens of a novel switching system framework, facilitating an efficient finite-time analysis. Empirical studies demonstrate that SDQ converges faster than double Q-learning while retaining the ability to mitigate the maximization bias. Finally, we derive a finite-time expected error bound for SDQ.
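The following is a minimal tabular sketch contrasting the two update rules described above. The double Q-learning step follows the standard scheme (a coin flip selects which estimator to update, with the other estimator evaluating the greedy action). The SDQ step shown here, in which both estimators are updated simultaneously and each bootstraps from the other, is one plausible reading of the abstract's description; the function names, array layout, and step size are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def double_q_step(QA, QB, s, a, r, s_next, alpha, gamma, rng):
    # Standard double Q-learning: randomly select one estimator to update,
    # using the other estimator to evaluate its greedy action (reduces
    # the maximization bias of vanilla Q-learning).
    if rng.random() < 0.5:
        a_star = np.argmax(QA[s_next])
        QA[s, a] += alpha * (r + gamma * QB[s_next, a_star] - QA[s, a])
    else:
        b_star = np.argmax(QB[s_next])
        QB[s, a] += alpha * (r + gamma * QA[s_next, b_star] - QB[s, a])

def sdq_step(QA, QB, s, a, r, s_next, alpha, gamma):
    # Simultaneous double Q-learning (sketch): both estimators are updated
    # at every step, each bootstrapping from the other at its own greedy
    # action, so no random selection between the two estimators is needed.
    a_star = np.argmax(QA[s_next])
    b_star = np.argmax(QB[s_next])
    td_A = r + gamma * QB[s_next, a_star] - QA[s, a]
    td_B = r + gamma * QA[s_next, b_star] - QB[s, a]
    QA[s, a] += alpha * td_A
    QB[s, a] += alpha * td_B

# Example usage on a toy problem with 5 states and 2 actions (hypothetical sizes).
rng = np.random.default_rng(0)
QA = np.zeros((5, 2))
QB = np.zeros((5, 2))
sdq_step(QA, QB, s=0, a=1, r=1.0, s_next=2, alpha=0.1, gamma=0.99)
```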