First-order Policy Optimization for Robust Policy Evaluation

29 July 2023 · arXiv:2307.15890
Yuante Li
Guanghui Lan
OffRL
arXiv (abs) · PDF · HTML
Abstract

We adopt a policy optimization viewpoint towards policy evaluation for robust Markov decision processes (MDPs) with $\mathrm{s}$-rectangular ambiguity sets. The developed method, named first-order policy evaluation (FRPE), provides the first unified framework for robust policy evaluation in both deterministic (offline) and stochastic (online) settings, with either tabular representation or generic function approximation. In particular, we establish linear convergence in the deterministic setting and $\tilde{\mathcal{O}}(1/\epsilon^2)$ sample complexity in the stochastic setting. FRPE also extends naturally to evaluating the robust state-action value function with $(\mathrm{s}, \mathrm{a})$-rectangular ambiguity sets. We discuss the application of the developed results to stochastic policy optimization of large-scale robust MDPs.
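For orientation, here is a minimal sketch of the robust policy evaluation problem with an $\mathrm{s}$-rectangular ambiguity set, assuming a discounted cost criterion with an adversarially chosen transition kernel; the notation and sign conventions are ours and may differ from the paper's:

\[
V^{\pi}_{\mathrm{rob}}(s) \;=\; \max_{P \in \mathcal{P}} \, \mathbb{E}_{P,\pi}\Big[ \sum_{t=0}^{\infty} \gamma^{t}\, c(s_t, a_t) \,\Big|\, s_0 = s \Big],
\qquad
\mathcal{P} \;=\; \bigotimes_{s \in \mathcal{S}} \mathcal{P}_s .
\]

Here $\mathrm{s}$-rectangularity means the ambiguity set factorizes over states, so the worst case decomposes state by state through a robust Bellman evaluation operator of the form

\[
(\mathcal{T}^{\pi} V)(s) \;=\; \max_{p_s \in \mathcal{P}_s} \, \sum_{a} \pi(a \mid s)\Big( c(s,a) + \gamma \sum_{s'} p_s(s' \mid s, a)\, V(s') \Big),
\]

where, unlike in the $(\mathrm{s},\mathrm{a})$-rectangular case, the adversary's choice $p_s$ is coupled across actions at each state. The "policy optimization viewpoint" of the abstract can then be read as treating the adversary's kernel as the decision variable of this inner optimization, which is what makes first-order methods applicable; see the paper for the precise formulation.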
