Nonstochastic Bandits with Infinitely Many Experts

Abstract

We study the problem of nonstochastic bandits with expert advice, extending the setting from finitely many experts to any countably infinite set: a learner aims to maximize the total reward by taking actions sequentially based on bandit feedback while benchmarking against a set of experts. We propose a variant of Exp4.P that, for finitely many experts, enables inference of correct expert rankings while preserving the order of the regret upper bound. We then incorporate this variant into a meta-algorithm that handles infinitely many experts. We prove a high-probability regret upper bound of $\tilde{\mathcal{O}}\big(i^*K + \sqrt{KT}\big)$, up to polylog factors, where $i^*$ is the unknown position of the best expert, $K$ is the number of actions, and $T$ is the time horizon. We also provide an example of structured experts and discuss how to expedite learning in such cases. Our meta-algorithm achieves optimal regret up to polylog factors when $i^* = \tilde{\mathcal{O}}\big(\sqrt{T/K}\big)$. If a prior distribution is assumed to exist for $i^*$, the probability of optimality increases with $T$, and this convergence can be fast.
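To make the expert-advice bandit setting concrete, the following is a minimal sketch of the classic Exp4 exponential-weights scheme that the paper's Exp4.P variant builds on. This is an illustrative baseline under assumed inputs (per-round expert advice distributions and a full reward table, of which only the chosen arm's entry is used), not the paper's algorithm: Exp4.P additionally incorporates confidence terms for high-probability guarantees, and the meta-algorithm extends the finite expert set to a countably infinite one.

```python
import math
import random

def exp4(expert_advice, rewards, gamma=0.1, seed=0):
    """Minimal Exp4-style sketch for nonstochastic bandits with expert advice.

    expert_advice[t][n][k]: expert n's probability of action k at round t.
    rewards[t][k]: reward of action k at round t (only the chosen arm's
    reward is ever observed -- bandit feedback).
    Returns the total reward collected and the final expert weights.
    """
    rng = random.Random(seed)
    T, N, K = len(expert_advice), len(expert_advice[0]), len(expert_advice[0][0])
    w = [1.0] * N                # exponential weights over experts
    total = 0.0
    for t in range(T):
        ws = sum(w)
        q = [wi / ws for wi in w]                      # distribution over experts
        # Induced action distribution, mixed with uniform exploration.
        p = [(1 - gamma) * sum(q[n] * expert_advice[t][n][k] for n in range(N))
             + gamma / K for k in range(K)]
        a = rng.choices(range(K), weights=p)[0]        # sample an action
        r = rewards[t][a]                              # bandit feedback only
        total += r
        rhat = r / p[a]                                # importance-weighted estimate
        for n in range(N):
            yhat = expert_advice[t][n][a] * rhat       # expert n's estimated reward
            w[n] *= math.exp(gamma * yhat / K)         # exponential update
    return total, w
```

On a toy instance where one expert always plays the single rewarding arm, the weight of that expert comes to dominate, which is the ranking signal the paper's variant is designed to extract reliably.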
