We study the sample complexity of reducing reinforcement learning to a sequence of empirical risk minimization problems over the policy space. Such reductions-based algorithms exhibit local convergence in the function space, as opposed to the parameter space for policy gradient algorithms, and thus are unaffected by the possibly non-linear or discontinuous parameterization of the policy class. We propose a variance-reduced variant of Conservative Policy Iteration that improves the sample complexity of producing an $\varepsilon$-functional local optimum from $O(\varepsilon^{-4})$ to $O(\varepsilon^{-3})$. Under state-coverage and policy-completeness assumptions, the algorithm enjoys $\varepsilon$-global optimality after sampling $O(\varepsilon^{-2})$ times, improving upon the previously established $O(\varepsilon^{-3})$ sample requirement.
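For context, below is a minimal sketch of the classical Conservative Policy Iteration loop (Kakade & Langford, 2002) that this abstract builds on: a greedy "ERM" step against the current advantages followed by a conservative mixture update, stopping at an approximate functional local optimum. The tabular setup, exact policy evaluation, and all identifiers are illustrative assumptions, not taken from the paper; the paper's sampled, variance-reduced advantage estimation is what would replace the exact `policy_eval` step here.

```python
# Illustrative tabular CPI sketch (not the paper's algorithm); names are hypothetical.
import numpy as np

def policy_eval(P, R, pi, gamma):
    """Exact V and Q for a stochastic policy pi[s, a] on a tabular MDP."""
    nS, nA = R.shape
    P_pi = np.einsum("sa,saz->sz", pi, P)           # state transition matrix under pi
    r_pi = np.einsum("sa,sa->s", pi, R)             # expected reward under pi
    V = np.linalg.solve(np.eye(nS) - gamma * P_pi, r_pi)
    return V, R + gamma * P @ V                     # V[s], Q[s, a]

def occupancy(P, pi, mu, gamma):
    """Discounted state-occupancy measure d^pi under start distribution mu."""
    P_pi = np.einsum("sa,saz->sz", pi, P)
    d = np.linalg.solve(np.eye(P.shape[0]) - gamma * P_pi.T, (1 - gamma) * mu)
    return d / d.sum()

def cpi(P, R, mu, gamma=0.9, alpha=0.1, eps=1e-3, max_iters=500):
    nS, nA = R.shape
    pi = np.full((nS, nA), 1.0 / nA)                # start from the uniform policy
    for _ in range(max_iters):
        V, Q = policy_eval(P, R, pi, gamma)         # paper: variance-reduced estimates instead
        A = Q - V[:, None]                          # advantages A^pi(s, a)
        d = occupancy(P, pi, mu, gamma)
        # Greedy / ERM step: maximize E_{s ~ d^pi}[A^pi(s, pi'(s))] over the policy class;
        # with a tabular class this reduces to a per-state argmax.
        greedy = np.zeros_like(pi)
        greedy[np.arange(nS), A.argmax(axis=1)] = 1.0
        local_gap = d @ (greedy * A).sum(axis=1)    # functional local-optimality measure
        if local_gap <= eps:                        # eps-functional local optimum reached
            break
        # Conservative mixture update keeps the new policy close to the previous one.
        pi = (1 - alpha) * pi + alpha * greedy
    return pi

# Usage on a tiny random MDP (illustrative only).
rng = np.random.default_rng(0)
nS, nA = 5, 3
P = rng.dirichlet(np.ones(nS), size=(nS, nA))       # P[s, a, s']
R = rng.random((nS, nA))
mu = np.full(nS, 1.0 / nS)
pi_out = cpi(P, R, mu)
```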