Q-learning with Posterior Sampling

Bayesian posterior sampling techniques have demonstrated superior empirical performance in many exploration-exploitation settings. However, their theoretical analysis remains a challenge, especially in complex settings like reinforcement learning. In this paper, we introduce Q-Learning with Posterior Sampling (PSQL), a simple Q-learning-based algorithm that uses Gaussian posteriors on Q-values for exploration, akin to the popular Thompson Sampling algorithm in the multi-armed bandit setting. We show that in the tabular episodic MDP setting, PSQL achieves a regret bound of $\tilde{O}(H^2\sqrt{SAT})$, closely matching the known lower bound of $\Omega(H\sqrt{SAT})$. Here, $S$ and $A$ denote the number of states and actions in the underlying Markov Decision Process (MDP), and $T = KH$, with $K$ being the number of episodes and $H$ being the planning horizon. Our work provides several new technical insights into the core challenges in combining posterior sampling with dynamic programming and TD-learning-based RL algorithms, along with novel ideas for resolving those difficulties. We hope this will form a starting point for analyzing this efficient and important algorithmic technique in even more complex RL settings.
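To make the algorithmic idea concrete, below is a minimal, hypothetical Python sketch of Q-learning with Gaussian posterior sampling over Q-values in a toy tabular episodic MDP. It is not the paper's PSQL pseudocode: the posterior-width schedule `posterior_std`, the `1/n` learning rate, and the placeholder environment `step_env` are illustrative assumptions, not the quantities analyzed in the paper.

```python
import numpy as np

# Sketch: tabular episodic Q-learning where exploration comes from sampling
# each Q-value from a Gaussian posterior (Thompson-Sampling-style action choice).
S, A, H = 10, 4, 5                            # states, actions, horizon (toy sizes)
num_episodes = 1000

Q_mean = np.full((H, S, A), float(H))         # optimistic initial mean Q estimates
counts = np.zeros((H, S, A))                  # visit counts per (step, state, action)
rng = np.random.default_rng(0)

def posterior_std(n, h):
    """Heuristic posterior width that shrinks with the visit count (assumption)."""
    return (H - h) / np.sqrt(max(n, 1))

def step_env(h, s, a):
    """Placeholder environment; replace with the MDP of interest."""
    return rng.integers(S), rng.random()      # (next state, reward)

for episode in range(num_episodes):
    s = 0                                     # fixed initial state for simplicity
    for h in range(H):
        # Sample one Q-value per action from its Gaussian posterior, act greedily.
        stds = [posterior_std(counts[h, s, a], h) for a in range(A)]
        sampled_q = rng.normal(Q_mean[h, s], stds)
        a = int(np.argmax(sampled_q))
        s_next, r = step_env(h, s, a)

        # Standard TD-style update of the posterior mean with a 1/n step size.
        counts[h, s, a] += 1
        lr = 1.0 / counts[h, s, a]
        target = r + (Q_mean[h + 1, s_next].max() if h + 1 < H else 0.0)
        Q_mean[h, s, a] += lr * (target - Q_mean[h, s, a])
        s = s_next
```

The sketch only illustrates the exploration mechanism (posterior sampling instead of UCB bonuses); the regret guarantees quoted above depend on the specific learning-rate and posterior-variance choices made in the paper.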
@article{agrawal2025_2506.00917,
  title   = {Q-learning with Posterior Sampling},
  author  = {Priyank Agrawal and Shipra Agrawal and Azmat Azati},
  journal = {arXiv preprint arXiv:2506.00917},
  year    = {2025}
}