Scale-Free Adversarial Multi-Armed Bandit with Arbitrary Feedback Delays

Abstract

We consider the Scale-Free Adversarial Multi-Armed Bandit (MAB) problem with unrestricted feedback delays. In contrast to the standard assumption that all losses are $[0,1]$-bounded, in our setting losses can fall in a general bounded interval $[-L, L]$, unknown to the agent beforehand. Furthermore, the feedback of each arm pull can experience arbitrary delays. We propose a novel approach named Scale-Free Delayed INF (SFD-INF) for this setting, which combines a recent "convex combination trick" with a novel doubling and skipping technique. We then present two instances of SFD-INF, each with carefully designed delay-adapted learning scales. The first, SFD-TINF, uses a $\frac{1}{2}$-Tsallis entropy regularizer and achieves $\widetilde{\mathcal O}(\sqrt{K(D+T)}L)$ regret when the losses are non-negative, where $K$ is the number of actions, $T$ is the number of steps, and $D$ is the total feedback delay. This bound nearly matches the $\Omega((\sqrt{KT}+\sqrt{D\log K})L)$ lower bound when $K$ is regarded as a constant independent of $T$. The second, SFD-LBINF, works for general scale-free losses and achieves a small-loss-style adaptive regret bound of $\widetilde{\mathcal O}(\sqrt{K\,\mathbb{E}[\tilde{\mathfrak L}_T^2]}+\sqrt{KD}\,L)$, which reduces to the $\widetilde{\mathcal O}(\sqrt{K(D+T)}L)$ regret in the worst case and is thus more general than SFD-TINF, despite a more complicated analysis and several extra logarithmic factors. Moreover, both instances also outperform the existing algorithms for non-delayed (i.e., $D=0$) scale-free adversarial MAB problems, which can be of independent interest.
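To make the first instance concrete, the sketch below shows a standard form of the $\frac{1}{2}$-Tsallis entropy regularizer as it appears in follow-the-regularized-leader (FTRL) style bandit algorithms. The exact regularizer scaling, the delay-adapted learning rate $\eta_t$, and the loss estimates $\hat{\ell}_s$ used by SFD-TINF are assumptions here, since the abstract does not specify them.

```latex
% 1/2-Tsallis entropy over the probability simplex \Delta_K
% (up to constants; the exact scaling in SFD-TINF is not given in the abstract):
\Psi(p) \;=\; -2\sum_{i=1}^{K} \sqrt{p_i}, \qquad p \in \Delta_K.
% A generic FTRL update with this regularizer and a (hypothetical)
% delay-adapted learning rate \eta_t then selects the sampling distribution
p_t \;=\; \operatorname*{arg\,min}_{p \in \Delta_K}
    \Big\langle \textstyle\sum_{s < t} \hat{\ell}_s,\, p \Big\rangle
    + \frac{1}{\eta_t}\, \Psi(p),
% where \hat{\ell}_s are importance-weighted loss estimates built from
% the (possibly delayed) feedback received so far.
```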
