Bellman operator convergence enhancements in reinforcement learning algorithms

This paper reviews the topological groundwork for the study of reinforcement learning (RL) by focusing on the structure of state, action, and policy spaces. We begin by recalling key mathematical concepts, such as complete metric spaces, that form the foundation for expressing RL problems. We then show how the Banach fixed-point theorem explains the convergence of RL algorithms: Bellman operators, viewed as contraction mappings on Banach spaces, guarantee convergence to a unique fixed point. The work serves as a bridge between theoretical mathematics and practical algorithm design, offering new approaches to enhance the efficiency of RL. In particular, we investigate alternative formulations of Bellman operators and demonstrate their impact on improving convergence rates and performance in standard RL environments such as MountainCar, CartPole, and Acrobot. Our findings highlight how a deeper mathematical understanding of RL can lead to more effective algorithms for decision-making problems.
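The contraction argument the abstract refers to can be illustrated on a toy problem. The sketch below (a randomly generated MDP, not taken from the paper) applies the standard Bellman optimality operator and checks that successive sup-norm differences shrink by at least the discount factor γ, which is exactly what the Banach fixed-point theorem requires for convergence of value iteration.

```python
import numpy as np

# Toy MDP (assumption for illustration): random transition kernel and rewards.
# The Bellman optimality operator
#   (T V)(s) = max_a [ R(s, a) + gamma * sum_{s'} P(s' | s, a) V(s') ]
# is a gamma-contraction in the sup norm, so iterating it converges
# geometrically to the unique fixed point V*.

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 5, 3, 0.9

P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)       # row-stochastic transition matrices
R = rng.random((n_actions, n_states))   # rewards R[a, s]

def bellman(V):
    # (T V)(s): for each state, maximize expected reward-to-go over actions.
    return np.max(R + gamma * P @ V, axis=0)

V = np.zeros(n_states)
errors = []
for _ in range(50):
    V_next = bellman(V)
    errors.append(np.max(np.abs(V_next - V)))  # sup-norm step size
    V = V_next

# Contraction property: ||T V_{k+1} - T V_k|| <= gamma * ||V_{k+1} - V_k||,
# so each successive error is at most gamma times the previous one.
ratios = [errors[i + 1] / errors[i] for i in range(5)]
assert all(r <= gamma + 1e-9 for r in ratios)
```

The same check applies to any of the alternative Bellman operators the paper studies, as long as the modified operator remains a contraction with some modulus γ' < 1.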
@article{kadurha2025_2505.14564,
  title   = {Bellman operator convergence enhancements in reinforcement learning algorithms},
  author  = {David Krame Kadurha and Domini Jocema Leko Moutouo and Yae Ulrich Gaba},
  journal = {arXiv preprint arXiv:2505.14564},
  year    = {2025}
}