This study considers the partial monitoring problem with $k$ actions and $d$ outcomes and provides the first best-of-both-worlds algorithms, whose regret is favorably bounded both in the stochastic and adversarial regimes. In particular, we show that for non-degenerate locally observable games, the regret scales as $\log T$ in the stochastic regime and as $\sqrt{T}$ in the adversarial regime, up to game-dependent and additional logarithmic factors, where $T$ is the number of rounds; the bounds depend on $m$, the maximum number of distinct observations per action, $\Delta_{\min}$, the minimum suboptimality gap, and $k_\Pi$, the number of Pareto optimal actions. Moreover, we show that for globally observable games, the regret scales as $\log T$ in the stochastic regime and as $T^{2/3}$ in the adversarial regime, up to similar factors, where the bounds depend on a game-dependent constant $c_\mathcal{G}$. We also provide regret bounds for a stochastic regime with adversarial corruptions. Our algorithms are based on the follow-the-regularized-leader framework and are inspired by the approach of exploration by optimization and the adaptive learning rate in the field of online learning with feedback graphs.
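
To make the follow-the-regularized-leader (FTRL) framework referenced above concrete, the following is a minimal, illustrative sketch of a generic FTRL update over the probability simplex with an entropic regularizer and a decreasing learning rate. It is not the paper's algorithm, which additionally combines FTRL with exploration by optimization and a feedback-graph-style adaptive learning rate; the names `ftrl_distribution`, `cumulative_loss`, and `eta_t` are hypothetical.

```python
# Illustrative sketch only: a generic FTRL update over the probability simplex
# with a negative-Shannon-entropy regularizer and a time-varying learning rate.
# This is NOT the paper's partial monitoring algorithm; it only shows the
# general framework the abstract refers to.
import numpy as np

def ftrl_distribution(cumulative_loss: np.ndarray, eta_t: float) -> np.ndarray:
    """Return q_t = argmin_q <q, L_{t-1}> + psi(q) / eta_t with psi = negative entropy.

    For the entropic regularizer the minimizer has the closed form
    q_t(a) proportional to exp(-eta_t * L_{t-1}(a)).
    """
    logits = -eta_t * cumulative_loss
    logits -= logits.max()              # numerical stability
    weights = np.exp(logits)
    return weights / weights.sum()

# Toy usage: k = 3 actions, shrinking learning rate eta_t = 1 / sqrt(t).
rng = np.random.default_rng(0)
cumulative_loss = np.zeros(3)
for t in range(1, 6):
    q_t = ftrl_distribution(cumulative_loss, eta_t=1.0 / np.sqrt(t))
    action = rng.choice(3, p=q_t)
    loss_estimate = rng.random(3)       # stand-in for an unbiased loss estimator
    cumulative_loss += loss_estimate
```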