ResearchTrend.AI
Does Sparsity Help in Learning Misspecified Linear Bandits?

29 March 2023
Jialin Dong
Lin F. Yang
Abstract

Recently, the study of misspecified linear bandits has generated intriguing implications for the hardness of learning in bandits and reinforcement learning (RL). In particular, Du et al. (2020) show that even if a learner is given linear features in $\mathbb{R}^d$ that approximate the rewards in a bandit or RL with a uniform error of $\varepsilon$, searching for an $O(\varepsilon)$-optimal action requires pulling at least $\Omega(\exp(d))$ queries. Furthermore, Lattimore et al. (2020) show that a degraded $O(\varepsilon\sqrt{d})$-optimal solution can be learned within $\operatorname{poly}(d/\varepsilon)$ queries. Yet it is unknown whether a structural assumption on the ground-truth parameter, such as sparsity, could break the $\varepsilon\sqrt{d}$ barrier. In this paper, we address this question by showing that algorithms can obtain $O(\varepsilon)$-optimal actions by querying $O(\varepsilon^{-s} d^s)$ actions, where $s$ is the sparsity parameter, removing the $\exp(d)$-dependence. We then establish information-theoretic lower bounds, i.e., $\Omega(\exp(s))$, to show that our upper bound on sample complexity is nearly tight if one demands an error $O(s^{\delta}\varepsilon)$ for $0<\delta<1$. For $\delta\geq 1$, we further show that $\operatorname{poly}(s/\varepsilon)$ queries are possible when the linear features are "good" and even in general settings. These results provide a nearly complete picture of how sparsity can help in misspecified bandit learning and provide a deeper understanding of when linear features are "useful" for bandit and reinforcement learning with misspecification.
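For quick reference, the bounds stated in the abstract can be tabulated as follows (a reader's summary assembled only from the abstract above; $d$ is the ambient dimension, $s$ the sparsity, $\varepsilon$ the uniform misspecification error):

```latex
\begin{tabular}{lll}
Setting & Target suboptimality & Query complexity \\
\hline
No sparsity (Du et al., 2020) & $O(\varepsilon)$ & $\Omega(\exp(d))$ (lower bound) \\
No sparsity (Lattimore et al., 2020) & $O(\varepsilon\sqrt{d})$ & $\operatorname{poly}(d/\varepsilon)$ \\
$s$-sparse (this paper) & $O(\varepsilon)$ & $O(\varepsilon^{-s} d^s)$ \\
$s$-sparse (this paper) & $O(s^{\delta}\varepsilon)$, $0<\delta<1$ & $\Omega(\exp(s))$ (lower bound) \\
$s$-sparse (this paper) & $O(s^{\delta}\varepsilon)$, $\delta\geq 1$ & $\operatorname{poly}(s/\varepsilon)$ \\
\end{tabular}
```

The table makes the paper's headline claim visible at a glance: sparsity replaces the $\exp(d)$ barrier with an $\exp(s)$-type dependence, which is nearly tight in the $0<\delta<1$ regime.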
