
The sample complexity of weighted sparse approximation

Abstract

For Gaussian sampling matrices, we provide bounds on the minimal number of measurements $m$ required to achieve robust weighted sparse recovery guarantees, in terms of how well a given prior model for the sparsity support aligns with the true underlying support. Our main contribution is that for a sparse vector $\mathbf{x} \in \mathbb{R}^N$ supported on an unknown set $\mathcal{S} \subset \{1, \dots, N\}$ with $|\mathcal{S}| \leq k$, if $\mathcal{S}$ has \emph{weighted cardinality} $\omega(\mathcal{S}) := \sum_{j \in \mathcal{S}} \omega_j^2$, and if the weights on $\mathcal{S}^c$ exhibit mild growth, $\omega_j^2 \geq \gamma \log(j/\omega(\mathcal{S}))$ for $j \in \mathcal{S}^c$ and $\gamma > 0$, then the sample complexity for sparse recovery via weighted $\ell_1$-minimization using weights $\omega_j$ is linear in the weighted sparsity level: $m = \mathcal{O}(\omega(\mathcal{S})/\gamma)$. This main result generalizes several special cases, including a) the standard sparse recovery setting, where all weights $\omega_j \equiv 1$ and $m = \mathcal{O}(k \log(N/k))$; b) the setting where the support is known a priori, and $m = \mathcal{O}(k)$; and c) the setting of sparse recovery with prior information, where $m$ depends on how well the weights align with the support set $\mathcal{S}$. We further extend the results in case c) to the setting of additive noise. Our results are \emph{nonuniform}; that is, they apply for a fixed support, unknown a priori, and the weights on $\mathcal{S}$ need not all be smaller than the weights on $\mathcal{S}^c$ for our recovery results to hold.
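To make the setting concrete, the following is a minimal sketch (not from the paper) of weighted $\ell_1$-minimization with a Gaussian sampling matrix. The specific dimensions, weight profile, and the reduction of $\min \sum_j \omega_j |x_j|$ subject to $A\mathbf{x} = \mathbf{y}$ to a linear program via the standard split $\mathbf{x} = \mathbf{x}^+ - \mathbf{x}^-$ are illustrative choices, solved here with SciPy's `linprog`:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, k, m = 40, 3, 20

# Ground-truth k-sparse vector supported on S = {0, 1, 2}.
x_true = np.zeros(N)
x_true[:k] = rng.standard_normal(k)

# Weights encoding a prior: small (= 1) on indices believed to be in the
# support, growing logarithmically off it, mimicking w_j^2 ~ gamma*log(j).
omega = np.ones(N)
omega[5:] = np.sqrt(np.log(np.arange(5, N) + 2.0))

# Gaussian sampling matrix and noiseless measurements y = A x.
A = rng.standard_normal((m, N)) / np.sqrt(m)
y = A @ x_true

# Weighted l1-minimization as an LP: write x = xp - xn with xp, xn >= 0,
# minimize omega.(xp + xn) subject to A(xp - xn) = y.
c = np.concatenate([omega, omega])
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y,
              bounds=[(0, None)] * (2 * N), method="highs")
x_hat = res.x[:N] - res.x[N:]
err = np.linalg.norm(x_hat - x_true)
```

Because the weights are small on the true support and grow off it, the weighted sparsity level $\omega(\mathcal{S}) = k$ here, and $m = 20$ measurements comfortably exceed the $\mathcal{O}(\omega(\mathcal{S})/\gamma)$ scale, so the LP recovers $\mathbf{x}$ to solver precision.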

