
arXiv:2003.12699
Bypassing the Monster: A Faster and Simpler Optimal Algorithm for Contextual Bandits under Realizability

28 March 2020
D. Simchi-Levi
Yunzong Xu
Abstract

We consider the general (stochastic) contextual bandit problem under the realizability assumption, i.e., the expected reward, as a function of contexts and actions, belongs to a general function class $\mathcal{F}$. We design a fast and simple algorithm that achieves the statistically optimal regret with only $O(\log T)$ calls to an offline regression oracle across all $T$ rounds. The number of oracle calls can be further reduced to $O(\log\log T)$ if $T$ is known in advance. Our results provide the first universal and optimal reduction from contextual bandits to offline regression, solving an important open problem in the contextual bandit literature. A direct consequence of our results is that any advances in offline regression immediately translate to contextual bandits, statistically and computationally. This leads to faster algorithms and improved regret guarantees for broader classes of contextual bandit problems.
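The reduction works by periodically fitting a reward predictor with the offline regression oracle and then selecting actions via inverse-gap weighting: the empirically best action gets the remaining probability mass, while every other action is played with probability inversely proportional to its estimated reward gap, scaled by a parameter that grows across epochs. The following is a minimal illustrative sketch of that action-selection rule only (the epoch schedule, oracle calls, and the exact scaling of `gamma` are omitted; the function name and interface are ours, not the paper's):

```python
import numpy as np

def inverse_gap_weights(predictions, gamma):
    """Inverse-gap-weighted action distribution (illustrative sketch).

    predictions: estimated rewards f_hat(x, a) for each of K actions,
                 produced by an offline regression oracle.
    gamma: positive scaling parameter; larger gamma concentrates mass
           on the empirically best action (it grows across epochs).
    """
    K = len(predictions)
    best = int(np.argmax(predictions))
    probs = np.zeros(K)
    for a in range(K):
        if a != best:
            # Probability shrinks with the estimated reward gap.
            probs[a] = 1.0 / (K + gamma * (predictions[best] - predictions[a]))
    # Best action absorbs the remaining mass (always at least 1/K).
    probs[best] = 1.0 - probs.sum()
    return probs
```

Because each non-best probability is at most $1/K$, the result is always a valid distribution, and as `gamma` increases over epochs the policy smoothly shifts from exploration toward exploitation, which is what lets the algorithm get away with only $O(\log T)$ oracle calls.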
