Policy Optimization via Adv2: Adversarial Learning on Advantage Functions

We revisit the reduction of learning in adversarial Markov decision processes (MDPs) to adversarial learning based on Q-values; this reduction has been considered in a number of recent articles as one building block to perform policy optimization. Namely, we first consider and extend this reduction in an ideal setting where an oracle provides value functions: it may involve any adversarial learning strategy (not just exponential weights) and it may be based indifferently on Q-values or on advantage functions. We then present two extensions: on the one hand, convergence of the last iterate for a vast class of adversarial learning strategies (again, not just exponential weights), satisfying a property called monotonicity of weights; on the other hand, stronger regret criteria for learning in MDPs, inherited from the stronger regret criteria of adversarial learning called strongly adaptive regret and tracking regret. Third, we demonstrate how adversarial learning, also referred to as aggregation of experts, relates to aggregation (orchestration) of expert policies: we obtain stronger forms of performance guarantees in this setting than existing ones, via yet another, simple reduction. Finally, we discuss the impact of the reduction of learning in adversarial MDPs to adversarial learning in the practical scenarios where transition kernels are unknown and value functions must be learned. In particular, we review the literature and note that many strategies for policy optimization feature a policy-improvement step based on exponential weights with estimated Q-values. Our main message is that this step may be replaced by the application of any adversarial learning strategy on estimated Q-values or on estimated advantage functions. We leave the empirical evaluation of these twists for future research.
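To make the policy-improvement step mentioned above concrete, here is a minimal NumPy sketch (our own illustration with hypothetical names and array shapes, not code from the paper) of the exponential-weights update on estimated Q-values, together with the same update driven by advantage functions. The abstract's point is that the exponential-weights strategy below could be swapped for any other adversarial learning strategy.

```python
import numpy as np

def exp_weights_policy_update(pi, q_hat, eta):
    """One exponential-weights (Hedge) policy-improvement step.

    pi:    (n_states, n_actions) current policy, rows sum to 1,
           assumed to have full support (all entries positive).
    q_hat: (n_states, n_actions) estimated Q-values.
    eta:   positive learning rate.

    Implements pi'(a|s) proportional to pi(a|s) * exp(eta * Q(s, a)).
    """
    logits = np.log(pi) + eta * q_hat
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    new_pi = np.exp(logits)
    return new_pi / new_pi.sum(axis=1, keepdims=True)

def exp_weights_on_advantages(pi, q_hat, eta):
    """Same update driven by advantages A(s,a) = Q(s,a) - V(s),
    with V(s) = sum_a pi(a|s) Q(s,a).

    For exponential weights, the per-state shift by V(s) cancels in the
    normalization, so this yields the same policy as updating on Q-values;
    other adversarial learning strategies need not share this invariance.
    """
    v_hat = (pi * q_hat).sum(axis=1, keepdims=True)
    return exp_weights_policy_update(pi, q_hat - v_hat, eta)
```

As the comments note, for exponential weights the Q-value and advantage variants coincide; the interest of working with advantages arises for adversarial learning strategies without this per-state shift invariance.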