UVIP: Model-Free Approach to Evaluate Reinforcement Learning Algorithms

5 May 2021 · arXiv:2105.02135
Denis Belomestny
I. Levin
Eric Moulines
A. Naumov
S. Samsonov
V. Zorina
    OffRL
Abstract

Policy evaluation is an important instrument for the comparison of different algorithms in Reinforcement Learning (RL). Yet even precise knowledge of the value function $V^{\pi}$ corresponding to a policy $\pi$ does not provide reliable information on how far the policy $\pi$ is from the optimal one. We present a novel model-free upper value iteration procedure (${\sf UVIP}$) that allows us to estimate the suboptimality gap $V^{\star}(x) - V^{\pi}(x)$ from above and to construct confidence intervals for $V^{\star}$. Our approach relies on upper bounds to the solution of the Bellman optimality equation via a martingale approach. We provide theoretical guarantees for ${\sf UVIP}$ under general assumptions and illustrate its performance on a number of benchmark RL problems.
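
The abstract only sketches the mechanism, so the snippet below illustrates the general flavour of such an upper bound on a toy problem. Everything here is a hypothetical sketch, not the authors' UVIP algorithm: the toy MDP, the sample sizes, and the update rule are assumptions. In particular, the update obtains its upward bias simply by taking the max over actions inside an outer Monte Carlo average over small empirical means (Jensen's inequality, $\mathbb{E}[\max] \ge \max \mathbb{E}$), which is the simplest stand-in for the paper's martingale-based construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy MDP (illustrative only): random transitions and rewards.
S, A, gamma = 6, 3, 0.9
P = rng.dirichlet(np.ones(S), size=(S, A))  # P[x, a] = distribution over next states
r = rng.uniform(0.0, 1.0, size=(S, A))      # r[x, a] = immediate reward

def value_iteration(tol=1e-10):
    # Exact V* for reference; UVIP itself is model-free and would not use P directly.
    V = np.zeros(S)
    while True:
        V_next = (r + gamma * P @ V).max(axis=1)
        if np.abs(V_next - V).max() < tol:
            return V_next
        V = V_next

def policy_value(pi, tol=1e-10):
    # Exact V^pi for a deterministic policy pi (an S-vector of actions).
    idx = np.arange(S)
    V = np.zeros(S)
    while True:
        V_next = r[idx, pi] + gamma * (P[idx, pi] @ V)
        if np.abs(V_next - V).max() < tol:
            return V_next
        V = V_next

def upper_iteration(V_up, n_outer=64, n_inner=4):
    # One sweep of an upper update: replace the conditional expectation in the
    # Bellman optimality operator by a small empirical mean, and take the max
    # over actions INSIDE the outer average. By Jensen's inequality the result
    # over-estimates the Bellman optimum, so the iterates hover above V*.
    V_next = np.empty(S)
    for x in range(S):
        outer = np.empty(n_outer)
        for i in range(n_outer):
            q = np.empty(A)
            for a in range(A):
                ys = rng.choice(S, size=n_inner, p=P[x, a])  # simulator calls
                q[a] = r[x, a] + gamma * V_up[ys].mean()
            outer[i] = q.max()
        V_next[x] = outer.mean()
    return V_next

V_star = value_iteration()
pi_myopic = r.argmax(axis=1)      # a deliberately suboptimal, myopic policy
V_pi = policy_value(pi_myopic)

V_up = np.zeros(S)
for _ in range(100):
    V_up = upper_iteration(V_up)

# V_up - V_pi upper-estimates the true suboptimality gap V* - V_pi.
print("true gap      :", np.round(V_star - V_pi, 3))
print("estimated gap :", np.round(V_up - V_pi, 3))
```

Increasing `n_inner` shrinks the upward bias of this sketch toward zero; the paper's martingale construction is what turns this heuristic over-estimation into a valid upper bound with theoretical guarantees.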
