
EUBRL: Epistemic Uncertainty Directed Bayesian Reinforcement Learning

Jianfei Ma
Wee Sun Lee
Main: 10 pages · 6 figures · 4 tables · Bibliography: 4 pages · Appendix: 46 pages
Abstract

At the boundary between the known and the unknown, an agent inevitably confronts the dilemma of whether to explore or to exploit. Epistemic uncertainty reflects such boundaries, representing systematic uncertainty due to limited knowledge. In this paper, we propose a Bayesian reinforcement learning (RL) algorithm, EUBRL, which leverages epistemic guidance to achieve principled exploration. This guidance adaptively reduces the per-step regret arising from estimation errors. We establish nearly minimax-optimal regret and sample complexity guarantees for a class of sufficiently expressive priors in infinite-horizon discounted MDPs. Empirically, we evaluate EUBRL on tasks characterized by sparse rewards, long horizons, and stochasticity. Results demonstrate that EUBRL achieves superior sample efficiency, scalability, and consistency.
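The abstract does not spell out EUBRL's update rule, but the core idea, using epistemic uncertainty to direct exploration, can be illustrated with a minimal sketch. The snippet below is not the paper's algorithm: it approximates a posterior over Q-values with a bootstrapped ensemble of tabular Q-tables and treats ensemble disagreement as an epistemic bonus when selecting actions. All names and constants (n_states, bonus_scale, the toy chain MDP, etc.) are illustrative assumptions.

```python
# Hypothetical sketch of epistemic-uncertainty-directed exploration,
# NOT the EUBRL algorithm itself: an ensemble of Q-tables stands in for a
# Bayesian posterior, and its per-action standard deviation is the bonus.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, n_members = 10, 2, 5   # toy chain MDP dimensions (assumed)
gamma, lr, bonus_scale = 0.95, 0.1, 1.0     # illustrative hyperparameters

# Randomly initialized ensemble members act as crude posterior samples.
Q = rng.normal(0.0, 0.1, size=(n_members, n_states, n_actions))

def select_action(s):
    """Maximize posterior-mean Q plus an epistemic (disagreement) bonus."""
    mean = Q[:, s, :].mean(axis=0)
    epistemic = Q[:, s, :].std(axis=0)  # high where the ensemble disagrees
    return int(np.argmax(mean + bonus_scale * epistemic))

def update(s, a, r, s_next):
    """TD update; each member trains on its own bootstrap mask for diversity."""
    for k in range(n_members):
        if rng.random() < 0.8:  # bootstrap mask keeps members decorrelated
            target = r + gamma * Q[k, s_next].max()
            Q[k, s, a] += lr * (target - Q[k, s, a])

# Toy sparse-reward chain: reward only at the rightmost state.
s = 0
for step in range(2000):
    a = select_action(s)
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    r = 1.0 if s_next == n_states - 1 else 0.0
    update(s, a, r, s_next)
    s = 0 if s_next == n_states - 1 else s_next  # reset on reaching the goal

print("greedy policy:", Q.mean(axis=0).argmax(axis=1))
```

Running the sketch on the chain task shows the intended behavior: early on the bonus pushes the agent toward poorly visited states, and as the ensemble members converge the bonus shrinks, recovering greedy exploitation. This mirrors, at a cartoon level, the adaptive reduction of estimation-error-driven regret described in the abstract.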
