arXiv:2006.02612
Problem-Complexity Adaptive Model Selection for Stochastic Linear Bandits

4 June 2020
Avishek Ghosh
Abishek Sankararaman
Kannan Ramchandran
Abstract

We consider the problem of model selection for two popular stochastic linear bandit settings, and propose algorithms that adapt to the unknown problem complexity. In the first setting, we consider $K$-armed mixture bandits, where the mean reward of arm $i \in [K]$ is $\mu_i + \langle \alpha_{i,t}, \theta^* \rangle$, with $\alpha_{i,t} \in \mathbb{R}^d$ being the known context vector and $\mu_i \in [-1,1]$ and $\theta^*$ being unknown parameters. We define $\|\theta^*\|$ as the problem complexity and consider a sequence of nested hypothesis classes, each positing a different upper bound on $\|\theta^*\|$. Exploiting this, we propose Adaptive Linear Bandit (ALB), a novel phase-based algorithm that adapts to the true problem complexity $\|\theta^*\|$. We show that ALB achieves a regret scaling of $O(\|\theta^*\|\sqrt{T})$, where $\|\theta^*\|$ is a priori unknown. As a corollary, when $\theta^* = 0$, ALB recovers the minimax regret of the simple bandit algorithm without such knowledge of $\theta^*$. ALB is the first algorithm that uses the parameter norm as a model selection criterion for linear bandits. Prior state-of-the-art algorithms \cite{osom} achieve a regret of $O(L\sqrt{T})$, where $L$ is an upper bound on $\|\theta^*\|$, fed as an input to the problem. In the second setting, we consider the standard linear bandit problem (with possibly an infinite number of arms), where the sparsity of $\theta^*$, denoted by $d^* \leq d$, is unknown to the algorithm. Defining $d^*$ as the problem complexity, we show that ALB achieves $O(d^*\sqrt{T})$ regret, matching that of an oracle who knows the true sparsity level. This methodology is then extended to the case of finitely many arms, and similar results are proven. ALB is the first algorithm to achieve such model selection guarantees.
We further verify our results via synthetic and real-data experiments.
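To make the phase-based idea concrete, below is a minimal Python sketch of how an algorithm can adapt to an unknown $\|\theta^*\|$: it runs LinUCB-style selection in doubling-length phases, scaling the confidence bonus by a running norm bound $b$, and re-estimates $b$ from the least-squares estimate at each phase boundary. This is an illustrative sketch, not the paper's exact ALB procedure; the bonus form, slack term, and phase schedule here are simplified assumptions.

```python
import numpy as np

def alb_sketch(theta_star, T=2000, K=10, d=2, noise=0.1, seed=0):
    """Phase-based adaptive linear bandit sketch.

    The confidence width is scaled by a running upper bound b on
    ||theta*||; b starts small and is refined from data at the end
    of each phase, so regret tracks the true norm rather than a
    worst-case input bound L.
    """
    rng = np.random.default_rng(seed)
    b = 1.0 / np.sqrt(T)          # initial (deliberately small) norm bound
    V = np.eye(d)                 # ridge-regularized Gram matrix
    xy = np.zeros(d)              # running sum of context * reward
    t, phase_len, regret = 0, 100, 0.0
    while t < T:
        for _ in range(min(phase_len, T - t)):
            X = rng.normal(size=(K, d))
            X /= np.linalg.norm(X, axis=1, keepdims=True)  # unit contexts
            theta_hat = np.linalg.solve(V, xy)
            Vinv = np.linalg.inv(V)
            # UCB index: estimate plus a bonus scaled by the current bound b
            widths = np.sqrt(np.einsum('ij,jk,ik->i', X, Vinv, X))
            a = int(np.argmax(X @ theta_hat + b * widths))
            r = X[a] @ theta_star + noise * rng.normal()
            regret += np.max(X @ theta_star) - X[a] @ theta_star
            V += np.outer(X[a], X[a])
            xy += r * X[a]
            t += 1
        # Phase boundary: refine the norm bound from the data so far
        theta_hat = np.linalg.solve(V, xy)
        b = np.linalg.norm(theta_hat) + 1.0 / np.sqrt(t)  # estimate + slack
        phase_len *= 2            # doubling phase lengths
    return np.linalg.norm(np.linalg.solve(V, xy)), regret
```

Run on an instance with $\|\theta^*\| = 0.5$, the returned norm estimate converges toward the true complexity while the algorithm never needs an a priori bound $L$.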
