Finite Sample Analysis of Linear Temporal Difference Learning with Arbitrary Features

Abstract
Linear TD(λ) is one of the most fundamental reinforcement learning algorithms for policy evaluation. Convergence rates have previously been established under the assumption of linearly independent features, an assumption that fails in many practical scenarios. This paper instead establishes the first convergence rates for linear TD(λ) operating with arbitrary features, without any algorithmic modification or additional assumptions. Our results apply to both the discounted and average-reward settings. To address the potential non-uniqueness of solutions arising from arbitrary features, we develop a novel stochastic approximation result featuring convergence rates to the solution set instead of a single point.
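For context, a standard textbook form of the linear TD(λ) update in the discounted setting (the eligibility-trace recursion; the paper's exact iterates and notation may differ) is

\begin{align*}
z_t &= \gamma \lambda\, z_{t-1} + \phi(S_t), \\
w_{t+1} &= w_t + \alpha_t \bigl(R_{t+1} + \gamma\, w_t^\top \phi(S_{t+1}) - w_t^\top \phi(S_t)\bigr) z_t,
\end{align*}

where $\phi$ is the feature map, $\alpha_t$ the step size, and $z_t$ the eligibility trace. With arbitrary (possibly linearly dependent) features, the fixed points of this update need not be unique, which is why the analysis measures convergence to a solution set rather than to a single point.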
@article{xie2025_2505.21391,
  title   = {Finite Sample Analysis of Linear Temporal Difference Learning with Arbitrary Features},
  author  = {Zixuan Xie and Xinyu Liu and Rohan Chandra and Shangtong Zhang},
  journal = {arXiv preprint arXiv:2505.21391},
  year    = {2025}
}