Bypassing the Monster: A Faster and Simpler Optimal Algorithm for Contextual Bandits under Realizability

Mathematics of Operations Research (MOR), 2020
Abstract

We consider the general (stochastic) contextual bandit problem under the realizability assumption, i.e., the expected reward, as a function of contexts and actions, belongs to a general function class $\mathcal{F}$. We design a fast and simple algorithm that achieves the statistically optimal regret with only $O(\log T)$ calls to an offline least-squares regression oracle across all $T$ rounds (the number of oracle calls can be further reduced to $O(\log\log T)$ if $T$ is known in advance). Our algorithm provides the first universal and optimal reduction from contextual bandits to offline regression, solving an important open problem for the realizable setting of contextual bandits. Our algorithm is also the first provably optimal contextual bandit algorithm with a logarithmic number of oracle calls.
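To make the reduction concrete, the following is a minimal, illustrative sketch (not the paper's full algorithm) of the two ingredients the abstract describes: a probabilistic action-selection rule driven by a regression oracle's predictions, here written as inverse-gap weighting, and a doubling epoch schedule under which the oracle is refit only $O(\log T)$ times over $T$ rounds. All function names and the parameter `gamma` are assumptions for this sketch.

```python
import numpy as np

def igw_probabilities(preds, gamma):
    """Inverse-gap weighting over K actions (illustrative sketch).

    Given the oracle's predicted rewards for each action, every
    non-greedy action a receives probability 1/(K + gamma * gap(a)),
    where gap(a) is its predicted-reward gap to the greedy action;
    the greedy action keeps the remaining mass. Larger gamma means
    more exploitation.
    """
    preds = np.asarray(preds, dtype=float)
    K = preds.size
    best = int(np.argmax(preds))
    p = 1.0 / (K + gamma * (preds[best] - preds))
    p[best] = 0.0                 # zero out before renormalizing
    p[best] = 1.0 - p.sum()       # greedy action takes the leftover mass
    return p

def epoch_starts(T):
    """Doubling epoch schedule: refit the regression oracle only at
    rounds 2^0, 2^1, 2^2, ..., giving O(log T) oracle calls in total."""
    m, starts = 0, []
    while 2 ** m <= T:
        starts.append(2 ** m)
        m += 1
    return starts
```

For example, with predicted rewards `[1.0, 0.5, 0.0]` and `gamma = 10`, the two suboptimal actions get probabilities `1/8` and `1/13`, and the greedy action absorbs the rest; over `T = 1024` rounds the schedule triggers only 11 oracle refits.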
