Solving General-Utility Markov Decision Processes in the Single-Trial Regime with Online Planning

In this work, we contribute the first approach for solving infinite-horizon discounted general-utility Markov decision processes (GUMDPs) in the single-trial regime, i.e., when the agent's performance is evaluated based on a single trajectory. First, we establish fundamental results on policy optimization in the single-trial regime: we identify which class of policies suffices for optimality, cast our problem as a particular MDP that is equivalent to the original problem, and study the computational hardness of policy optimization in this regime. Second, we show how online planning techniques, in particular a Monte-Carlo tree search algorithm, can be leveraged to solve GUMDPs in the single-trial regime. Third, we provide experimental results showcasing the superior performance of our approach in comparison to relevant baselines.
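To give a sense of the setting, the sketch below illustrates single-trial planning for a general-utility objective on a toy chain MDP. It is not the paper's algorithm: instead of full Monte-Carlo tree search it uses a simplified flat Monte-Carlo planner, and all names (`step`, `trajectory_utility`, `plan_action`, the entropy utility) are illustrative assumptions. The key point it demonstrates is that a general utility is a (possibly non-linear) function of a single trajectory's empirical state occupancy, so the trajectory-so-far must be carried as part of the planning state.

```python
import math, random

def step(s, a, n_states=5):
    # Deterministic chain MDP: action is -1 or +1, clipped to [0, n_states-1].
    return max(0, min(n_states - 1, s + a))

def trajectory_utility(visits, n_states=5):
    # Illustrative general utility: entropy of the empirical state occupancy
    # of ONE trajectory (non-linear in the occupancy, hence not a standard
    # cumulative-reward objective).
    total = sum(visits.values())
    probs = [visits.get(s, 0) / total for s in range(n_states)]
    return -sum(p * math.log(p) for p in probs if p > 0)

def rollout(s, visits, depth, horizon):
    # Random rollout to the horizon, accumulating occupancy counts.
    v = dict(visits)
    while depth < horizon:
        s = step(s, random.choice([-1, 1]))
        v[s] = v.get(s, 0) + 1
        depth += 1
    return trajectory_utility(v)

def plan_action(s, visits, depth, horizon, n_sims=200):
    # Flat Monte-Carlo planning (a simplification of MCTS): estimate each
    # action's utility by averaging random rollouts, then act greedily.
    # Note that `visits` (the trajectory so far) is part of the planning
    # state, because the utility depends on the whole trajectory.
    best_a, best_val = None, -float('inf')
    for a in (-1, 1):
        s2 = step(s, a)
        v2 = dict(visits)
        v2[s2] = v2.get(s2, 0) + 1
        est = sum(rollout(s2, v2, depth + 1, horizon)
                  for _ in range(n_sims)) / n_sims
        if est > best_val:
            best_a, best_val = a, est
    return best_a

random.seed(0)
s, visits, horizon = 0, {0: 1}, 12
for t in range(horizon):
    a = plan_action(s, visits, t, horizon)
    s = step(s, a)
    visits[s] = visits.get(s, 0) + 1
print(len(visits), round(trajectory_utility(visits), 3))
```

Because the entropy utility rewards spreading visits across states, the planner drives the single trajectory to cover the chain, something a fixed Markovian reward cannot express directly.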
@article{santos2025_2505.15782,
  title={Solving General-Utility Markov Decision Processes in the Single-Trial Regime with Online Planning},
  author={Pedro P. Santos and Alberto Sardinha and Francisco S. Melo},
  journal={arXiv preprint arXiv:2505.15782},
  year={2025}
}