On TD(0) with function approximation: Concentration bounds and a centered variant with exponential convergence

Abstract

We provide non-asymptotic bounds for the well-known temporal difference learning algorithm TD(0) with linear function approximators, including both high-probability bounds and bounds in expectation. Our analysis suggests that a step size inversely proportional to the number of iterations cannot guarantee the optimal rate of convergence unless we assume knowledge of the mixing rate of the Markov chain underlying the policy considered. This problem is alleviated by employing the well-known Polyak-Ruppert averaging scheme, which attains the optimal rate of convergence without any knowledge of the mixing rate. Furthermore, we propose a variant of TD(0) with linear approximators that incorporates a centering sequence, and we establish that it exhibits an exponential rate of convergence in expectation.
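
To make the setting concrete, the following is a minimal sketch of standard TD(0) with a linear function approximator combined with Polyak-Ruppert iterate averaging, as discussed above. The function name td0_polyak_ruppert, the step-size constant c, and the specific t^{-1/2} decay are illustrative assumptions, not the paper's exact prescription.

```python
import numpy as np

def td0_polyak_ruppert(features, rewards, next_features, gamma=0.99, c=1.0):
    """TD(0) with linear function approximation plus Polyak-Ruppert averaging.

    features[t] and next_features[t] hold the feature vectors
    phi(s_t) and phi(s_{t+1}) along a single trajectory; rewards[t] is r_t.
    Returns the final iterate and the averaged iterate. (Illustrative
    sketch; not the paper's exact algorithm or step-size schedule.)
    """
    d = features.shape[1]
    theta = np.zeros(d)       # raw TD(0) iterate
    theta_bar = np.zeros(d)   # Polyak-Ruppert average of the iterates
    for t in range(len(rewards)):
        phi, phi_next = features[t], next_features[t]
        # TD error: delta_t = r_t + gamma * theta^T phi(s_{t+1}) - theta^T phi(s_t)
        delta = rewards[t] + gamma * (phi_next @ theta) - (phi @ theta)
        # Assumed step size decaying slower than 1/t (here ~ t^{-1/2});
        # averaging the iterates is what the abstract credits with
        # recovering the optimal rate without knowing the mixing rate.
        theta = theta + (c / np.sqrt(t + 1)) * delta * phi
        # Online running mean: theta_bar_t = (1/t) * sum_{k<=t} theta_k
        theta_bar += (theta - theta_bar) / (t + 1)
    return theta, theta_bar
```

The design choice reflected here is that the raw iterate uses a step size decaying slower than 1/t, so it need not be tuned to the unknown mixing rate, while the returned running average smooths out the resulting noise.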
