
Bandit Convex Optimisation Revisited: FTRL Achieves $\tilde{O}(t^{1/2})$ Regret

Abstract

We show that a kernel estimator using multiple function evaluations can be easily converted into a sampling-based bandit estimator with expectation equal to the original kernel estimate. Plugging such a bandit estimator into the standard FTRL algorithm yields a bandit convex optimisation algorithm that achieves $\tilde{O}(t^{1/2})$ regret against adversarial time-varying convex loss functions.
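The abstract's paper-specific kernel construction is not reproduced here, but the general pattern it builds on can be sketched: form an unbiased loss/gradient estimate from bandit feedback (a single function evaluation at a sampled point), then feed those estimates to FTRL. The sketch below uses the classical single-evaluation sphere-sampling estimator as a stand-in for the kernel-based estimator; the function names, the step size `eta`, and the smoothing radius `delta` are all illustrative assumptions, not the paper's algorithm.

```python
import math
import random

def sample_unit_sphere(d):
    # Draw u uniformly from the unit sphere by normalizing a Gaussian vector.
    v = [random.gauss(0.0, 1.0) for _ in range(d)]
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def one_point_gradient(f, x, delta, d):
    # Classical single-evaluation bandit gradient estimate (a stand-in, not
    # the paper's kernel estimator): g = (d/delta) * f(x + delta*u) * u.
    # Its expectation is the gradient of a sphere-smoothed version of f,
    # which for linear f equals the true gradient exactly.
    u = sample_unit_sphere(d)
    fx = f([xi + delta * ui for xi, ui in zip(x, u)])
    return [(d / delta) * fx * ui for ui in u]

def ftrl_step(grad_sum, eta):
    # FTRL with quadratic regularizer ||x||^2 / (2*eta) over linearized
    # estimated losses: argmin_x <grad_sum, x> + ||x||^2/(2*eta)
    # has the closed form x = -eta * grad_sum ("lazy" gradient descent).
    return [-eta * g for g in grad_sum]
```

The key property the abstract relies on is unbiasedness in expectation: averaging many one-evaluation estimates recovers the gradient the full-information estimator would have produced. For a linear loss `f(x) = 2*x[0] - x[1]`, the smoothed gradient coincides with the true gradient `(2, -1)`, so a Monte Carlo average of `one_point_gradient` concentrates around it.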
