arXiv:1906.00531
Model selection for contextual bandits

3 June 2019
Dylan J. Foster
A. Krishnamurthy
Haipeng Luo
    OffRL
Abstract

We introduce the problem of model selection for contextual bandits, where a learner must adapt to the complexity of the optimal policy while balancing exploration and exploitation. Our main result is a new model selection guarantee for linear contextual bandits. We work in the stochastic realizable setting with a sequence of nested linear policy classes of dimension $d_1 < d_2 < \ldots$, where the $m^\star$-th class contains the optimal policy, and we design an algorithm that achieves $\tilde{O}(T^{2/3} d_{m^\star}^{1/3})$ regret with no prior knowledge of the optimal dimension $d_{m^\star}$. The algorithm also achieves regret $\tilde{O}(T^{3/4} + \sqrt{T d_{m^\star}})$, which is optimal for $d_{m^\star} \geq \sqrt{T}$. This is the first model selection result for contextual bandits with non-vacuous regret for all values of $d_{m^\star}$, and to the best of our knowledge is the first positive result of this type for any online learning setting with partial information. The core of the algorithm is a new estimator for the gap in the best loss achievable by two linear policy classes, which we show admits a convergence rate faster than the rate required to learn the parameters of either class.
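The abstract's central object is the gap in the best loss achievable by two nested linear classes. The paper's actual estimator is not given in the abstract; the sketch below is only a toy analogue on synthetic full-information data, where the gap is estimated by comparing empirical least-squares losses of a small class (first $d_1$ features) and a larger class (first $d_2$ features). All data, dimensions, and the `best_empirical_loss` helper are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Toy analogue only: compare the best empirical squared loss of two
# nested linear classes (feature prefixes of sizes d1 < d2). A positive
# gap indicates the smaller class is misspecified.

rng = np.random.default_rng(0)
n, d1, d2 = 2000, 3, 10          # sample size and nested dimensions, d1 < d2

X = rng.normal(size=(n, d2))     # contexts with d2 features
theta = np.zeros(d2)
theta[:5] = 1.0                  # true parameter uses 5 features, so the d1-class is misspecified
y = X @ theta + 0.1 * rng.normal(size=n)

def best_empirical_loss(X, y, d):
    """Least-squares fit restricted to the first d features; returns mean squared loss."""
    Xd = X[:, :d]
    w, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    r = y - Xd @ w
    return float(np.mean(r ** 2))

loss_small = best_empirical_loss(X, y, d1)   # misspecified class: loss inflated by missing features
loss_large = best_empirical_loss(X, y, d2)   # well-specified class: loss near the noise level
gap = loss_small - loss_large                # positive here, since the larger class fits strictly better
```

In the paper's bandit setting the learner only sees partial (bandit) feedback, which is precisely what makes estimating this gap faster than learning either class's parameters a nontrivial contribution; the full-information comparison above only illustrates what quantity is being estimated.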
