Rank-One Modified Value Iteration

In this paper, we propose a novel algorithm for solving planning and learning problems in Markov decision processes. The proposed algorithm follows a policy-iteration-type update that uses a rank-one approximation of the transition probability matrix in the policy evaluation step. This rank-one approximation is closely related to the stationary distribution of the corresponding transition probability matrix, which is approximated using the power method. We provide theoretical guarantees for the convergence of the proposed algorithm to the optimal (action-)value function, with the same rate and computational complexity as the value iteration algorithm in the planning setting and as the Q-learning algorithm in the learning setting. In extensive numerical simulations, however, we show that the proposed algorithm consistently outperforms first-order algorithms and their accelerated versions for both planning and learning problems.
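The following is a minimal, illustrative sketch of one plausible reading of the abstract, not the authors' exact algorithm: the greedy policy's transition matrix is replaced by the rank-one surrogate formed from an all-ones vector and an estimate of its stationary distribution (obtained via a few power-method steps), which makes the policy evaluation step available in closed form. All names, the update order, and the parameters (power_steps, n_iters, tol) are assumptions introduced here for illustration; the precise update rule and guarantees are given in the paper.

import numpy as np

def rank_one_modified_vi_sketch(P, R, gamma, n_iters=200, power_steps=5, tol=1e-8):
    """Illustrative sketch only (hypothetical, not the authors' exact method).
    P: transition tensor of shape (A, S, S); R: reward matrix of shape (S, A)."""
    A, S, _ = P.shape
    Q = np.zeros((S, A))
    mu = np.full(S, 1.0 / S)               # running estimate of the stationary distribution
    for _ in range(n_iters):
        # Policy improvement: act greedily with respect to the current Q.
        pi = Q.argmax(axis=1)
        P_pi = P[pi, np.arange(S), :]       # (S, S) transition matrix under the greedy policy
        r_pi = R[np.arange(S), pi]          # (S,) reward under the greedy policy
        # A few power-method steps toward the stationary distribution of P_pi.
        for _ in range(power_steps):
            mu = mu @ P_pi
            mu /= mu.sum()
        # Rank-one surrogate P_pi ~ ones * mu^T gives closed-form policy evaluation:
        # V = r_pi + gamma * (mu @ r_pi) / (1 - gamma) * ones   (assumed form).
        V = r_pi + gamma * (mu @ r_pi) / (1.0 - gamma)
        # One-step lookahead with the true model to refresh Q.
        Q_new = R + gamma * np.einsum('ast,t->sa', P, V)
        if np.max(np.abs(Q_new - Q)) < tol:
            return Q_new
        Q = Q_new
    return Q

The closed-form evaluation above follows from the rank-one structure (a Sherman-Morrison-style identity), which is what lets the sketch avoid solving a full linear system per iteration; whether the paper uses exactly this form is an assumption.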
@article{kolarijani2025_2505.01828,
  title   = {Rank-One Modified Value Iteration},
  author  = {Arman Sharifi Kolarijani and Tolga Ok and Peyman Mohajerin Esfahani and Mohamad Amin Sharif Kolarijani},
  journal = {arXiv preprint arXiv:2505.01828},
  year    = {2025}
}