Temporal Difference Learning (TD(0)) is fundamental in reinforcement learning, yet its finite-sample behavior under non-i.i.d. data and nonlinear approximation remains unknown. We provide the first high-probability, finite-sample analysis of vanilla TD(0) on polynomially mixing Markov data, assuming only Hölder continuity and bounded generalized gradients. This breaks with previous work, which often requires subsampling, projections, or instance-dependent step-sizes. Concretely, for mixing exponent $\beta$, Hölder continuity exponent $\gamma$, and step-size decay rate $\eta$, we show that, with high probability, \[ \| \theta_t - \theta^* \| \leq C(\beta, \gamma, \eta)\, t^{-\beta/2} + C'(\gamma, \eta)\, t^{-\eta\gamma} \] after $t$ iterations. These bounds match the known i.i.d. rates and hold even when initialization is nonstationary. Central to our proof is a novel discrete-time coupling that bypasses geometric ergodicity, yielding the first such guarantee for nonlinear TD(0) under realistic mixing.
@article{sridhar2025_2502.05706,
  title   = {Convergence of TD(0) under Polynomial Mixing with Nonlinear Function Approximation},
  author  = {Anupama Sridhar and Alexander Johansen},
  journal = {arXiv preprint arXiv:2502.05706},
  year    = {2025}
}
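To make the setting concrete, below is a minimal illustrative sketch (not from the paper) of vanilla TD(0) with a small nonlinear (tanh) value approximator, trained on a single Markov trajectory with polynomially decaying step-sizes $\alpha_t = a_0\, t^{-\eta}$ and no projections or subsampling. The chain, features, network size, and constants are assumptions chosen for illustration only.

```python
import numpy as np

# Illustrative sketch: vanilla TD(0) with a nonlinear (tanh) value approximator,
# a single Markov trajectory, and polynomially decaying step-sizes alpha_t = a0 * t^{-eta}.
# All quantities below (chain, features, network size) are assumptions for illustration.

rng = np.random.default_rng(0)

n_states, d_feat, d_hidden = 10, 4, 8
gamma_discount = 0.9      # MDP discount factor (distinct from the Hölder exponent gamma)
eta, a0 = 0.6, 0.5        # step-size decay rate and scale: alpha_t = a0 * t^{-eta}

# Random ergodic Markov chain, rewards, and state features (illustrative).
P = rng.random((n_states, n_states))
P /= P.sum(axis=1, keepdims=True)
r = rng.random(n_states)
Phi = rng.standard_normal((n_states, d_feat))

# Two-layer tanh value network V(s; theta) with parameters theta = (W, w).
W = 0.1 * rng.standard_normal((d_hidden, d_feat))
w = 0.1 * rng.standard_normal(d_hidden)

def value(s):
    return w @ np.tanh(W @ Phi[s])

def grads(s):
    h = np.tanh(W @ Phi[s])
    grad_w = h                                  # dV/dw
    grad_W = np.outer(w * (1 - h**2), Phi[s])   # dV/dW
    return grad_W, grad_w

s = rng.integers(n_states)                      # nonstationary initialization is allowed
for t in range(1, 50_001):
    s_next = rng.choice(n_states, p=P[s])
    delta = r[s] + gamma_discount * value(s_next) - value(s)   # TD error
    alpha_t = a0 * t ** (-eta)                                 # polynomial step-size decay
    grad_W, grad_w = grads(s)
    W += alpha_t * delta * grad_W   # vanilla semi-gradient update: no projection,
    w += alpha_t * delta * grad_w   # no subsampling, raw Markov samples
    s = s_next
```

The update uses only the current transition from the Markov chain, matching the "vanilla TD(0) on Markov data" setting described in the abstract; the analyzed rates would then govern how fast the parameter iterates concentrate around the TD fixed point $\theta^*$.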