Fighting Bandits with a New Kind of Smoothness

Abstract
We define a novel family of algorithms for the adversarial multi-armed bandit problem, and provide a simple analysis technique based on convex smoothing. We prove two main results. First, we show that regularization via the \emph{Tsallis entropy}, which includes EXP3 as a special case, achieves the minimax regret. Second, we show that a wide class of perturbation methods achieves near-optimal regret, as low as $O(\sqrt{NT \log N})$, provided the perturbation distribution has a bounded hazard rate. For example, the Gumbel, Weibull, Fréchet, Pareto, and Gamma distributions all satisfy this key property.
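As a rough illustration (not code from the paper), the sketch below shows the Gumbel-perturbation special case of a perturbation-based bandit algorithm, where the arm-selection probabilities have a closed form (a softmax over the estimated cumulative gains) and the method coincides with EXP3. The function name, the learning rate `eta`, and the toy `gain_fn` adversary are hypothetical choices made for this example.

```python
import numpy as np

def gumbel_perturbation_bandit(gain_fn, n_arms, horizon, eta=0.05, rng=None):
    """Minimal sketch: follow-the-perturbed-leader with Gumbel noise.

    With Gumbel perturbations, P(argmax_i eta*G_hat_i + Z_i = i) is exactly
    softmax(eta * G_hat), so the selection probabilities needed for the
    importance-weighted gain estimates are available in closed form.
    gain_fn(t, arm) is assumed to return a gain in [0, 1].
    """
    rng = np.random.default_rng() if rng is None else rng
    gain_est = np.zeros(n_arms)  # importance-weighted cumulative gain estimates
    total_gain = 0.0
    for t in range(horizon):
        logits = eta * gain_est
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        arm = rng.choice(n_arms, p=probs)
        gain = gain_fn(t, arm)                  # bandit feedback for the pulled arm only
        total_gain += gain
        gain_est[arm] += gain / probs[arm]      # unbiased estimate of the full gain vector
    return total_gain

# Toy usage: a stochastic adversary with one clearly better arm.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    means = np.array([0.3, 0.5, 0.7])
    reward = lambda t, a: float(rng.random() < means[a])
    print(gumbel_perturbation_bandit(reward, n_arms=3, horizon=5000, rng=rng))
```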