
Multiple-policy Evaluation via Density Estimation

Abstract

We study the multiple-policy evaluation problem, where we are given a set of $K$ policies and the goal is to evaluate their performance (expected total reward over a fixed horizon) to an accuracy $\epsilon$ with probability at least $1-\delta$. We propose an algorithm named $\mathrm{CAESAR}$ for this problem. Our approach is based on computing an approximately optimal offline sampling distribution and using the data sampled from it to estimate the policy values simultaneously. $\mathrm{CAESAR}$ has two phases. In the first, we produce coarse estimates of the visitation distributions of the target policies at a low-order sample complexity rate that scales as $\tilde{O}(\frac{1}{\epsilon})$. In the second phase, we approximate the optimal offline sampling distribution and compute the importance-weighting ratios for all target policies by minimizing a step-wise quadratic loss function inspired by the DualDICE \cite{nachum2019dualdice} objective. Up to low-order and logarithmic terms, $\mathrm{CAESAR}$ achieves a sample complexity of $\tilde{O}\left(\frac{H^4}{\epsilon^2}\sum_{h=1}^H \max_{k\in[K]} \sum_{s,a}\frac{\left(d_h^{\pi^k}(s,a)\right)^2}{\mu^*_h(s,a)}\right)$, where $d^{\pi}$ is the visitation distribution of policy $\pi$, $\mu^*$ is the optimal sampling distribution, and $H$ is the horizon.
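The sample complexity is governed by the quantity $\max_{k\in[K]}\sum_{s,a}(d_h^{\pi^k}(s,a))^2/\mu_h(s,a)$, which the optimal sampling distribution $\mu^*_h$ minimizes over the probability simplex at each step $h$. This is a convex program. As a rough illustration only (this is not the paper's procedure, and the toy distributions below are invented), it can be solved per step with a simple exponentiated-gradient method:

```python
import math

def optimal_sampling(ds, n_iters=2000, eta=0.05):
    """Approximately minimize max_k sum_i d_k[i]^2 / mu[i] over the simplex.

    ds: list of K visitation distributions (each a list of probabilities)
        for one fixed step h. Returns an approximate minimizer mu.
    """
    n = len(ds[0])
    mu = [1.0 / n] * n  # start from the uniform distribution
    for _ in range(n_iters):
        # pick the policy currently attaining the max (a subgradient of the max)
        worst = max(ds, key=lambda d: sum(di * di / mi for di, mi in zip(d, mu)))
        # gradient of d_i^2/mu_i w.r.t. mu_i is -d_i^2/mu_i^2; exponentiated step
        mu = [mi * math.exp(eta * di * di / (mi * mi)) for di, mi in zip(worst, mu)]
        z = sum(mu)
        mu = [mi / z for mi in mu]  # renormalize back onto the simplex
    return mu

# Two toy policies concentrated on different state-action pairs: the
# resulting mu hedges between them rather than matching either one.
mu = optimal_sampling([[0.7, 0.2, 0.1], [0.1, 0.2, 0.7]])
```

For a single policy ($K=1$), Cauchy–Schwarz gives the closed form $\mu \propto d$, which the iteration recovers; for $K>1$ the minimizer trades off the worst-case policy's variance.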
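To make the role of the importance-weighting ratios $d_h^{\pi^k}(s,a)/\mu_h(s,a)$ concrete, here is a minimal sketch (with invented toy tables; none of these names come from the paper) of how samples drawn from a single sampling distribution $\mu$ yield simultaneous value estimates for all $K$ policies:

```python
import random

def estimate_values(d, mu, r, H, n_samples, rng):
    """Importance-weighted Monte Carlo estimate of every policy's value.

    d[k][h] : visitation distribution of policy k at step h, dict (s, a) -> prob
    mu[h]   : sampling distribution at step h, dict (s, a) -> prob
    r[h]    : mean reward at step h, dict (s, a) -> reward (noiseless here)
    """
    K = len(d)
    values = [0.0] * K
    for h in range(H):
        support = list(mu[h])
        probs = [mu[h][sa] for sa in support]
        for _ in range(n_samples):
            sa = rng.choices(support, weights=probs)[0]  # draw (s, a) ~ mu_h
            for k in range(K):
                ratio = d[k][h].get(sa, 0.0) / mu[h][sa]  # importance ratio
                values[k] += ratio * r[h][sa] / n_samples
    return values

# Toy example: horizon 2, one state, two actions, two target policies.
H = 2
p0, p1 = ("s0", "a0"), ("s0", "a1")
mu = [{p0: 0.5, p1: 0.5} for _ in range(H)]
d = [
    [{p0: 0.8, p1: 0.2} for _ in range(H)],  # policy 0
    [{p0: 0.3, p1: 0.7} for _ in range(H)],  # policy 1
]
r = [{p0: 1.0, p1: 0.0} for _ in range(H)]

values = estimate_values(d, mu, r, H, n_samples=20000, rng=random.Random(0))
# The true values are 1.6 and 0.6; the estimates concentrate around them.
```

In the actual algorithm the ratios are not formed from known tables: they are learned by minimizing the step-wise quadratic (DualDICE-inspired) loss; the sketch reads them off directly only to keep the example short.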
