Finding a sparse vector in a subspace: Linear sparsity using alternating directions

15 December 2014
Qing Qu
Ju Sun
John N. Wright
arXiv:1412.4659
Abstract

Is it possible to find the sparsest vector (direction) in a generic subspace $\mathcal{S} \subseteq \mathbb{R}^p$ with $\mathrm{dim}(\mathcal{S}) = n < p$? This problem can be considered a homogeneous variant of the sparse recovery problem, and finds connections to sparse dictionary learning, sparse PCA, and many other problems in signal processing and machine learning. In this paper, we focus on a **planted sparse model** for the subspace: the target sparse vector is embedded in an otherwise random subspace. Simple convex heuristics for this planted recovery problem provably break down when the fraction of nonzero entries in the target sparse vector substantially exceeds $O(1/\sqrt{n})$. In contrast, we exhibit a relatively simple nonconvex approach based on alternating directions, which provably succeeds even when the fraction of nonzero entries is $\Omega(1)$. To the best of our knowledge, this is the first practical algorithm to achieve linear scaling under the planted sparse model. Empirically, our proposed algorithm also succeeds in more challenging data models, e.g., sparse dictionary learning.
