Online Sparse Linear Regression

7 March 2016
Dean Phillips Foster, Satyen Kale, H. Karloff
arXiv: 1603.02250 (abs · PDF · HTML)
Abstract

We consider the online sparse linear regression problem: the problem of sequentially making predictions while observing only a limited number of features in each round, so as to minimize regret with respect to the best sparse linear regressor, where prediction accuracy is measured by square loss. We give an inefficient algorithm that obtains regret bounded by $\tilde{O}(\sqrt{T})$ after $T$ prediction rounds. We complement this result by showing that no algorithm running in polynomial time per iteration can achieve regret bounded by $O(T^{1-\delta})$ for any constant $\delta > 0$ unless $\text{NP} \subseteq \text{BPP}$. This computational hardness result resolves an open problem presented at COLT 2014 (Kale, 2014) and also posed by Zolghadr et al. (2013). The hardness holds even if the algorithm is allowed to access more features than the best sparse linear regressor, up to a logarithmic factor in the dimension.
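
To make the setup concrete, here is a minimal Python sketch of the protocol under assumed names and parameter values (it is not the paper's algorithm): in each round the learner observes only k feature coordinates of its choosing, makes a prediction, and suffers square loss; regret is measured against the best k-sparse linear regressor in hindsight, computed here by brute force over feature subsets.

```python
import itertools

import numpy as np


def best_sparse_loss(X, y, k):
    """Hindsight benchmark: total square loss of the best k-sparse linear
    regressor, found by brute force over all size-k feature subsets.
    (Exponential in the dimension -- the paper's hardness result says this
    cost cannot in general be avoided by an efficient low-regret learner.)"""
    best = np.inf
    for S in itertools.combinations(range(X.shape[1]), k):
        XS = X[:, list(S)]
        w, *_ = np.linalg.lstsq(XS, y, rcond=None)
        best = min(best, float(np.sum((XS @ w - y) ** 2)))
    return best


def run_naive_learner(X, y, k, rng):
    """Placeholder online learner: commits to one random subset of k
    features, observes only those coordinates each round, predicts via
    least squares on the rounds seen so far, then suffers square loss.
    Purely illustrative -- NOT the paper's (inefficient) algorithm."""
    T, d = X.shape
    S = rng.choice(d, size=k, replace=False)  # features queried every round
    total = 0.0
    for t in range(T):
        if t == 0:
            y_hat = 0.0  # no data observed yet
        else:
            w, *_ = np.linalg.lstsq(X[:t, S], y[:t], rcond=None)
            y_hat = float(X[t, S] @ w)
        total += (y_hat - y[t]) ** 2
    return total


rng = np.random.default_rng(0)
T, d, k = 200, 8, 2
w_star = np.zeros(d)
w_star[:k] = [1.0, -0.5]  # hidden k-sparse target
X = rng.standard_normal((T, d))
y = X @ w_star + 0.1 * rng.standard_normal(T)

learner = run_naive_learner(X, y, k, rng)
benchmark = best_sparse_loss(X, y, k)
print(f"learner loss: {learner:.1f}  benchmark: {benchmark:.1f}  "
      f"regret: {learner - benchmark:.1f}")
```

Note that the hindsight benchmark enumerates all $\binom{d}{k}$ subsets; the paper's result says that, unless $\text{NP} \subseteq \text{BPP}$, no polynomial-time online learner can match it to within $O(T^{1-\delta})$ regret for any constant $\delta > 0$.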
