How Does an Approximate Model Help in Reinforcement Learning?

6 December 2019
Fei Feng, W. Yin, Lin F. Yang
arXiv: 1912.02986
Abstract

One of the key approaches to saving samples in reinforcement learning (RL) is to use knowledge from an approximate model, such as a simulator of the environment. However, how much does an approximate model help to learn a near-optimal policy of the true unknown model? Despite numerous empirical studies of transfer reinforcement learning, an answer to this question is still elusive. In this paper, we study the sample complexity of RL when an approximate model of the environment is provided. For an unknown Markov decision process (MDP), we show that the approximate model can effectively reduce the complexity by eliminating sub-optimal actions from the policy search space. In particular, we provide an algorithm that uses $\widetilde{O}(N/(1-\gamma)^3/\varepsilon^2)$ samples in a generative model to learn an $\varepsilon$-optimal policy, where $\gamma$ is the discount factor and $N$ is the number of near-optimal actions in the approximate model. This can be much smaller than the learning-from-scratch complexity $\widetilde{\Theta}(SA/(1-\gamma)^3/\varepsilon^2)$, where $S$ and $A$ are the sizes of the state and action spaces, respectively. We also provide a lower bound showing that the above upper bound is nearly tight if the value gap between near-optimal and sub-optimal actions in the approximate model is sufficiently large. Our results give a precise characterization of how an approximate model helps reinforcement learning when no additional assumption on the model is imposed.
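The core idea in the abstract is to use the approximate model to prune sub-optimal actions and then spend true-environment samples only on the surviving ones. The sketch below illustrates that pipeline for a tabular MDP. It is a minimal illustration under assumptions of my own (known rewards, a hypothetical sample_next_state generative-model oracle, and a user-chosen elimination threshold gap), not the paper's actual algorithm or its analysis.

```python
import numpy as np

def value_iteration(P, R, gamma, iters=1000):
    """Optimal Q-values of a tabular MDP with transitions P (S x A x S)
    and rewards R (S x A)."""
    S, A = R.shape
    Q = np.zeros((S, A))
    for _ in range(iters):
        V = Q.max(axis=1)            # (S,)
        Q = R + gamma * (P @ V)      # (S, A)
    return Q

def near_optimal_actions(P_hat, R_hat, gamma, gap):
    """Boolean mask (S x A) of actions whose Q-value in the approximate
    model is within `gap` of the best in that state; the rest are
    eliminated.  N = mask.sum() is the size of the reduced search space."""
    Q_hat = value_iteration(P_hat, R_hat, gamma)
    return Q_hat >= Q_hat.max(axis=1, keepdims=True) - gap

def plan_on_reduced_mdp(sample_next_state, R, mask, gamma, n_samples, iters=500):
    """Learn a policy for the true MDP from a generative model, drawing
    next-state samples only for retained (state, action) pairs.
    `sample_next_state(s, a, n)` is an assumed oracle returning n i.i.d.
    next states from the true dynamics; rewards R are taken as known."""
    S, A = mask.shape
    P_emp = np.zeros((S, A, S))
    for s in range(S):
        for a in range(A):
            if mask[s, a]:
                for s_next in sample_next_state(s, a, n_samples):
                    P_emp[s, a, s_next] += 1.0 / n_samples
    Q = np.zeros((S, A))
    for _ in range(iters):
        V = np.where(mask, Q, -np.inf).max(axis=1)   # ignore eliminated actions
        Q = R + gamma * (P_emp @ V)
    return np.where(mask, Q, -np.inf).argmax(axis=1)  # greedy over retained actions
```

If the mask retains $N$ state-action pairs, the sampling loop above draws next states only for those pairs, so the total sample count scales with $N$ rather than $SA$, which matches the intuition behind the stated complexity improvement.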
