Achieving Tractable Minimax Optimal Regret in Average Reward MDPs

In recent years, significant attention has been directed towards learning average-reward Markov Decision Processes (MDPs). However, existing algorithms either suffer from sub-optimal regret guarantees or computational inefficiencies. In this paper, we present the first tractable algorithm with minimax optimal regret of $\widetilde{O}(\sqrt{\mathrm{sp}(h^*) S A T})$, where $\mathrm{sp}(h^*)$ is the span of the optimal bias function $h^*$, $SA$ is the size of the state-action space, and $T$ the number of learning steps. Remarkably, our algorithm does not require prior information on $\mathrm{sp}(h^*)$. Our algorithm relies on a novel subroutine, Projected Mitigated Extended Value Iteration (PMEVI), to compute bias-constrained optimal policies efficiently. This subroutine can be applied to various previous algorithms to improve regret bounds.
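To illustrate the general idea of bias-constrained value iteration, the sketch below implements a classic UCRL2-style extended value iteration with an added span-clipping step. This is a hypothetical simplification for exposition, not the paper's actual PMEVI subroutine: the function name `projected_evi`, the confidence-radius inputs, and the simple clipping rule are all assumptions, and PMEVI's projection and mitigation steps are more refined than what is shown here.

```python
import numpy as np

def projected_evi(r_hat, p_hat, conf_r, conf_p, span_bound,
                  tol=1e-6, max_iter=10_000):
    """Extended value iteration with a span projection (illustrative sketch).

    r_hat: (S, A) empirical mean rewards in [0, 1].
    p_hat: (S, A, S) empirical transition probabilities.
    conf_r, conf_p: (S, A) confidence radii (rewards / L1 transition balls).
    span_bound: a bound B used to clip the span of the value iterates
    (a stand-in for the bias constraint; NOT the paper's PMEVI projection).
    """
    S, A = r_hat.shape
    span = lambda v: v.max() - v.min()
    r_opt = np.minimum(r_hat + conf_r, 1.0)  # optimistic rewards
    u = np.zeros(S)
    q = np.zeros((S, A))
    for _ in range(max_iter):
        order = np.argsort(-u)  # states sorted by decreasing value
        for s in range(S):
            for a in range(A):
                # Inner maximization of EVI: pick the optimistic transition
                # in the L1 ball around p_hat by shifting probability mass
                # toward the highest-value state (UCRL2-style).
                p = p_hat[s, a].copy()
                p[order[0]] = min(1.0, p[order[0]] + conf_p[s, a] / 2)
                excess = p.sum() - 1.0
                for s2 in reversed(order):  # drain mass from low-value states
                    if excess <= 0:
                        break
                    take = min(excess, p[s2])
                    p[s2] -= take
                    excess -= take
                q[s, a] = r_opt[s, a] + p @ u
        u_next = q.max(axis=1)
        # Projection step: keep the iterate's span below the bias bound.
        u_next = np.minimum(u_next, u_next.min() + span_bound)
        if span(u_next - u) < tol:  # standard EVI stopping rule
            break
        u = u_next - u_next.min()  # renormalize to prevent drift
    return q.argmax(axis=1), u_next  # greedy policy and clipped value iterate
```

The clipping line is where a span constraint enters: without it, this reduces to ordinary extended value iteration, whose value iterates can have span far exceeding $\mathrm{sp}(h^*)$ and thus inflate the regret.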