Online Learning for Stochastic Shortest Path Model via Posterior Sampling

9 June 2021
Mehdi Jafarnia-Jahromi
Liyu Chen
Rahul Jain
Haipeng Luo
Abstract

We consider the problem of online reinforcement learning for the Stochastic Shortest Path (SSP) problem, modeled as an unknown MDP with an absorbing state. We propose PSRL-SSP, a simple posterior sampling-based reinforcement learning algorithm for the SSP problem. The algorithm operates in epochs. At the beginning of each epoch, a sample is drawn from the posterior distribution over the unknown model dynamics, and the optimal policy with respect to the drawn sample is followed during that epoch. An epoch completes when either the number of visits to the goal state in the current epoch exceeds that of the previous epoch, or the number of visits to some state-action pair is doubled. We establish a Bayesian regret bound of $O(B_\star S\sqrt{AK})$, where $B_\star$ is an upper bound on the expected cost of the optimal policy, $S$ is the size of the state space, $A$ is the size of the action space, and $K$ is the number of episodes. The algorithm only requires knowledge of the prior distribution and has no hyper-parameters to tune. It is the first such posterior sampling algorithm and numerically outperforms previously proposed optimism-based algorithms.
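
The epoch-based procedure described in the abstract can be sketched in a few dozen lines of Python. The snippet below is an illustrative outline only, not the authors' implementation: it assumes a known cost matrix, a Dirichlet prior over each row of the transition kernel, a hypothetical environment interface (env.reset() returning a state index, env.step(a) returning the next state and a goal flag), and approximates the planning step with plain value iteration. All function and variable names are invented for this sketch.

```python
import numpy as np

def ssp_value_iteration(P, cost, goal, n_iters=1000, tol=1e-6):
    # Approximate SSP planning for a sampled model P (shape S x A x S) with a
    # known cost matrix (shape S x A); assumes the goal is reachable so that
    # value iteration converges.
    S, A, _ = P.shape
    V = np.zeros(S)
    Q = np.zeros((S, A))
    for _ in range(n_iters):
        Q = cost + P @ V                  # c(s, a) + sum_{s'} P[s, a, s'] * V[s']
        V_new = Q.min(axis=1)
        V_new[goal] = 0.0                 # goal state is absorbing and cost-free
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    return Q.argmin(axis=1)               # deterministic greedy policy

def psrl_ssp(env, S, A, goal, cost, K, seed=0):
    # Sketch of the PSRL-SSP epoch loop: sample a model from the posterior,
    # plan once per epoch, and switch epochs via the two stopping conditions.
    rng = np.random.default_rng(seed)
    alpha = np.ones((S, A, S))            # Dirichlet posterior over transition rows
    counts = np.zeros((S, A))             # total visit counts per state-action pair
    prev_goal_visits = 0
    episode = 0
    s = env.reset()
    while episode < K:
        # New epoch: draw one model from the posterior and plan against it.
        P = np.array([[rng.dirichlet(alpha[si, a]) for a in range(A)]
                      for si in range(S)])
        policy = ssp_value_iteration(P, cost, goal)
        epoch_counts = np.zeros((S, A))
        epoch_goal_visits = 0
        epoch_done = False
        while not epoch_done and episode < K:
            a = policy[s]
            s_next, reached_goal = env.step(a)   # hypothetical interface
            alpha[s, a, s_next] += 1.0           # conjugate posterior (count) update
            counts[s, a] += 1
            epoch_counts[s, a] += 1
            if reached_goal:
                epoch_goal_visits += 1
                episode += 1
                s = env.reset()
            else:
                s = s_next
            # Epoch ends when goal visits exceed the previous epoch's total, or
            # when visits to some state-action pair have doubled.
            doubled = np.any(epoch_counts >= np.maximum(counts - epoch_counts, 1))
            if epoch_goal_visits > prev_goal_visits or doubled:
                prev_goal_visits = epoch_goal_visits
                epoch_done = True
    return alpha
```

With a Dirichlet prior the posterior update reduces to incrementing transition counts, which is consistent with the abstract's point that the method needs only the prior distribution and has no hyper-parameters to tune.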
