Model-based Reinforcement Learning for Continuous Control with Posterior Sampling

20 November 2020
Ying Fan
Yifei Ming
arXiv:2012.09613
Abstract

Balancing exploration and exploitation is crucial in reinforcement learning (RL). In this paper, we study model-based posterior sampling for reinforcement learning (PSRL) in continuous state-action spaces theoretically and empirically. First, to the best of our knowledge, we show the first regret bound for PSRL in continuous spaces that is polynomial in the episode length. Under the assumption that the reward and transition functions can be modeled by Bayesian linear regression, we develop a regret bound of $\tilde{O}(H^{3/2}d\sqrt{T})$, where $H$ is the episode length, $d$ is the dimension of the state-action space, and $T$ is the total number of time steps. This result matches the best-known regret bound of non-PSRL methods in linear MDPs. Our bound can also be extended to nonlinear cases via feature embedding: using linear kernels on the feature representation $\phi$, the regret bound becomes $\tilde{O}(H^{3/2}d_{\phi}\sqrt{T})$, where $d_{\phi}$ is the dimension of the representation space. Moreover, we present MPC-PSRL, a model-based posterior sampling algorithm with model predictive control for action selection. To capture the uncertainty in the models, we use Bayesian linear regression on the penultimate layer (the feature representation layer $\phi$) of neural networks. Empirical results show that our algorithm achieves state-of-the-art sample efficiency on benchmark continuous control tasks compared to prior model-based algorithms, and matches the asymptotic performance of model-free algorithms.
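
As a rough illustration of the approach outlined in the abstract, the sketch below combines conjugate Bayesian linear regression on a feature map with per-episode posterior sampling of the model weights and a simple random-shooting planner for action selection. The feature map, dimensions, planner settings, and all names here are hypothetical stand-ins, not the paper's exact setup (the paper learns $\phi$ as the penultimate layer of a neural network and uses its own MPC procedure).

```python
# Minimal sketch, assuming a fixed hand-crafted feature map and a
# random-shooting planner in place of the paper's learned phi and MPC details.
import numpy as np


class BayesianLinearModel:
    """Conjugate Gaussian BLR: y = phi(s, a) @ W + noise, shared across outputs."""

    def __init__(self, feat_dim, out_dim, prior_var=1.0, noise_var=0.1):
        self.noise_var = noise_var
        self.precision = np.eye(feat_dim) / prior_var    # posterior precision
        self.phi_t_y = np.zeros((feat_dim, out_dim))     # sufficient statistic

    def update(self, Phi, Y):
        """Incorporate a batch of features Phi (N, d) and targets Y (N, k)."""
        self.precision += Phi.T @ Phi / self.noise_var
        self.phi_t_y += Phi.T @ Y / self.noise_var

    def sample_weights(self, rng):
        """Draw one posterior sample of the weight matrix (the PSRL step)."""
        cov = np.linalg.inv(self.precision)
        mean = cov @ self.phi_t_y
        return np.stack(
            [rng.multivariate_normal(mean[:, j], cov) for j in range(mean.shape[1])],
            axis=1,
        )


def features(state, action):
    # Hypothetical feature map standing in for the learned penultimate layer phi.
    sa = np.concatenate([state, action])
    return np.concatenate([sa, sa ** 2, [1.0]])


def plan_action(state, dyn_w, rew_w, action_dim, rng, horizon=10, n_candidates=200):
    """Return the first action of the best random action sequence evaluated
    under the sampled dynamics and reward models (stand-in for MPC)."""
    best_ret, best_a0 = -np.inf, None
    for _ in range(n_candidates):
        s, ret = state.copy(), 0.0
        seq = rng.uniform(-1.0, 1.0, size=(horizon, action_dim))
        for a in seq:
            f = features(s, a)
            ret += float(f @ rew_w)   # sampled reward model
            s = f @ dyn_w             # sampled transition model
        if ret > best_ret:
            best_ret, best_a0 = ret, seq[0]
    return best_a0
```

In the episodic loop described in the abstract, one would draw a single posterior sample of the dynamics and reward weights at the start of each episode, act with the planner under that sampled model, and then update both Bayesian linear models with the observed transitions and rewards.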
