Bypassing the Monster: A Faster and Simpler Optimal Algorithm for
Contextual Bandits under Realizability
We consider the general (stochastic) contextual bandit problem under the realizability assumption, i.e., the expected reward, as a function of contexts and actions, belongs to a general function class F. We design a fast and simple algorithm that achieves the statistically optimal regret with only O(log T) calls to an offline least-squares regression oracle across all T rounds (the number of oracle calls can be further reduced to O(log log T) if T is known in advance). Our algorithm provides the first universal and optimal reduction from contextual bandits to offline regression, solving an important open problem for the realizable setting of contextual bandits. Our algorithm is also the first provably optimal contextual bandit algorithm with a logarithmic number of oracle calls.
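At the heart of this style of reduction is a rule that turns the regression oracle's predicted rewards into an exploratory action distribution: each suboptimal action is played with probability inversely proportional to its estimated reward gap, and the remaining mass goes to the empirically best action. Below is a minimal illustrative sketch of that inverse-gap-weighting step (the function name and the learning-rate parameter `gamma` are our own; this is not the paper's full algorithm, which also specifies an epoch schedule and how `gamma` is tuned):

```python
import numpy as np

def igw_action_probs(predicted_rewards, gamma):
    """Inverse-gap-weighted action distribution (illustrative sketch).

    Given the oracle's predicted reward for each of K actions, assign
    probability 1 / (K + gamma * gap) to every suboptimal action, where
    gap is its estimated shortfall from the best action, and put the
    leftover mass on the empirically best action.
    """
    r = np.asarray(predicted_rewards, dtype=float)
    K = r.size
    best = int(np.argmax(r))
    p = np.zeros(K)
    for a in range(K):
        if a != best:
            p[a] = 1.0 / (K + gamma * (r[best] - r[a]))
    p[best] = 1.0 - p.sum()  # remaining probability mass on the greedy action
    return p

# Larger gamma concentrates play on the greedy action; smaller gamma explores more.
probs = igw_action_probs([0.9, 0.5, 0.1], gamma=10.0)
```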