An Improved Relaxation for Oracle-Efficient Adversarial Contextual Bandits

Abstract

We present an oracle-efficient relaxation for the adversarial contextual bandits problem, where the contexts are drawn sequentially i.i.d. from a known distribution and the cost sequence is chosen by an online adversary. Our algorithm achieves a regret bound of $O(T^{2/3}(K\log|\Pi|)^{1/3})$ and makes at most $O(K)$ calls per round to an offline optimization oracle, where $K$ denotes the number of actions, $T$ the number of rounds, and $\Pi$ the set of policies. This is the first result to improve on the prior best bound of $O((TK)^{2/3}(\log|\Pi|)^{1/3})$ obtained by Syrgkanis et al. at NeurIPS 2016, and the first to match the original bound of Langford and Zhang at NeurIPS 2007, which was obtained for the stochastic case.
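To make the improvement concrete, the following sketch compares the two regret bounds stated above (constants omitted, since both are big-O statements). The function names and the particular values of $T$, $K$, and $|\Pi|$ are illustrative choices, not part of the paper; the point is that the ratio of the prior bound to the new one grows as $K^{1/3}$.

```python
import math

def new_bound(T, K, num_policies):
    # Regret bound from this paper: O(T^(2/3) * (K * log|Pi|)^(1/3)),
    # up to constant factors.
    return T ** (2 / 3) * (K * math.log(num_policies)) ** (1 / 3)

def prior_bound(T, K, num_policies):
    # Prior best bound (Syrgkanis et al., 2016):
    # O((TK)^(2/3) * (log|Pi|)^(1/3)), up to constant factors.
    return (T * K) ** (2 / 3) * math.log(num_policies) ** (1 / 3)

# Illustrative parameters (hypothetical, not from the paper).
T, K, num_policies = 100_000, 16, 1_000

# The K-dependence cancels except for a K^(1/3) factor:
# prior / new = K^(2/3) / K^(1/3) = K^(1/3).
ratio = prior_bound(T, K, num_policies) / new_bound(T, K, num_policies)
```

For $K = 16$ the ratio is $16^{1/3} \approx 2.52$, independent of $T$ and $|\Pi|$, which is exactly the $K^{1/3}$ improvement in the action-set dependence.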
