
Contextual Bandit Algorithms with Supervised Learning Guarantees

Abstract

We address the problem of learning in an online, bandit setting where the learner must repeatedly select among $K$ actions, but only receives partial feedback based on its choices. We establish two new facts: First, using a new algorithm called Exp4.P, we show that it is possible to compete with the best in a set of $N$ experts with probability $1-\delta$ while incurring regret at most $O(\sqrt{KT\ln(N/\delta)})$ over $T$ time steps. The new algorithm is tested empirically on a large-scale, real-world dataset. Second, we give a new algorithm called VE that competes with a possibly infinite set of policies of VC-dimension $d$ while incurring regret at most $O(\sqrt{T(d\ln T + \ln(1/\delta))})$ with probability $1-\delta$. These guarantees improve on those of all previous algorithms, whether in a stochastic or adversarial environment, and bring us closer to providing supervised learning type guarantees for the contextual bandit setting.
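To make the Exp4.P guarantee concrete, here is a minimal Python sketch of an Exp4.P-style learner: exponential weights over the $N$ experts, importance-weighted reward estimates, and a variance-based optimism bonus in the weight update, which is the modification that upgrades Exp4's expected-regret bound to a high-probability one. The callables (`expert_advice`, `get_reward`) and the exact constants (`p_min`, `conf`) are illustrative assumptions here, not taken verbatim from the paper's pseudocode.

```python
import numpy as np

def exp4p(expert_advice, get_reward, K, T, delta, rng=np.random.default_rng()):
    """Sketch of an Exp4.P-style learner (constants are assumptions).

    expert_advice(t) -> (N, K) array: each expert's probability
        distribution over the K actions at round t.
    get_reward(t, a) -> observed reward in [0, 1] for action a at round t.
    """
    N = expert_advice(0).shape[0]
    p_min = np.sqrt(np.log(N) / (K * T))          # exploration floor
    conf = np.sqrt(np.log(N / delta) / (K * T))   # confidence-width term
    w = np.ones(N)                                # expert weights

    for t in range(T):
        xi = expert_advice(t)                     # (N, K) advice matrix

        # Mix expert advice by weight, then smooth toward uniform so
        # every action is played with probability at least p_min.
        q = w @ xi / w.sum()
        p = (1 - K * p_min) * q + p_min

        a = rng.choice(K, p=p)
        r = get_reward(t, a)

        # Importance-weighted reward estimate: unbiased for every action,
        # and bounded by 1 / p_min thanks to the exploration floor.
        r_hat = np.zeros(K)
        r_hat[a] = r / p[a]

        # Per-expert estimated reward and a variance-proxy term.
        y_hat = xi @ r_hat                        # estimated expert reward
        v_hat = (xi / p).sum(axis=1)              # variance upper bound

        # Exponential-weights update with an optimism bonus; the bonus
        # term is what yields the high-probability regret guarantee.
        w *= np.exp((p_min / 2) * (y_hat + v_hat * conf))
```

The key design choice relative to plain Exp4 is the `v_hat * conf` bonus: experts whose advice concentrates on rarely-played actions have noisier reward estimates, and crediting them with a confidence-width bonus controls the deviation of the estimates, which is how the $O(\sqrt{KT\ln(N/\delta)})$ bound is made to hold with probability $1-\delta$ rather than only in expectation.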
