Improved Regret Bounds for Oracle-Based Adversarial Contextual Bandits

Abstract
We give an oracle-based algorithm for the adversarial contextual bandit problem, where either contexts are drawn i.i.d. or the sequence of contexts is known a priori, but where the losses are picked adversarially. Our algorithm is computationally efficient, assuming access to an offline optimization oracle, and enjoys a regret of order $O\big((KT)^{2/3}(\log N)^{1/3}\big)$, where $K$ is the number of actions, $T$ is the number of iterations, and $N$ is the number of baseline policies. Our result is the first to break the $O(T^{3/4})$ barrier that is achieved by recently introduced algorithms. Breaking this barrier was left as a major open problem. Our analysis is based on the recent relaxation-based approach of (Rakhlin and Sridharan, 2016).
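For context, a minimal sketch of the quantity being bounded: the standard policy regret against the best fixed policy in the baseline class. The notation below ($x_t$, $a_t$, $\ell_t$, $\Pi$) is illustrative and not fixed by the abstract itself:
\[
\mathrm{Reg}_T \;=\; \mathbb{E}\!\left[\sum_{t=1}^{T} \ell_t(a_t)\right] \;-\; \min_{\pi \in \Pi}\, \sum_{t=1}^{T} \ell_t\big(\pi(x_t)\big)
\;=\; O\!\big((KT)^{2/3}(\log N)^{1/3}\big),
\]
where $x_t$ is the context at round $t$, $a_t$ is the action played, $\ell_t$ is the adversarially chosen loss, and $N = |\Pi|$ is the number of baseline policies.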