
Conditioning of Random Block Subdictionaries with Applications to Block-Sparse Recovery and Regression

20 September 2013
W. Bajwa, Marco F. Duarte, A. Calderbank
arXiv:1309.5310
Abstract

This paper makes several contributions to the "underdetermined" setting in linear models in which the observations are given by a linear combination of a small number of groups of columns of a dictionary, termed the "block-sparse" case. First, it specifies conditions on the dictionary under which most block submatrices of the dictionary are well conditioned. This result differs fundamentally from earlier work on block-sparse inference because (i) it provides conditions that can be explicitly computed in polynomial time, (ii) the conditions translate into near-optimal scaling of the number of columns of the block subdictionaries as a function of the number of observations for a large class of dictionaries, and (iii) it suggests that it is the spectral norm of the dictionary, rather than its column/block coherences, that fundamentally limits the dimensions of well-conditioned block subdictionaries. Second, to clarify the significance of this result for block-sparse inference, the paper investigates the problems of block-sparse recovery and block-sparse regression in underdetermined settings. For both problems, it uses the result on conditioning of block subdictionaries to establish that near-optimal block-sparse recovery and block-sparse regression are possible for a large class of dictionaries, provided the dictionary satisfies easily computable conditions and the coefficients describing the linear combination of groups of columns can be modeled through a mild statistical prior. Third, the paper reports carefully constructed numerical experiments that highlight the effects of different measures of the dictionary in block-sparse inference problems.
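
As a rough numerical sketch (not the paper's own construction) of the dictionary measures the abstract refers to, the NumPy snippet below draws a Gaussian dictionary with unit-norm columns, computes its spectral norm and one standard notion of worst-case block coherence, and inspects the extreme singular values of a randomly chosen block subdictionary. The dimensions, the Gaussian design, and the particular coherence definition are assumptions made for illustration only.

import numpy as np

rng = np.random.default_rng(0)
n, num_blocks, m = 128, 64, 4              # observations, blocks, columns per block
p = num_blocks * m                         # total columns (p > n: underdetermined)

# Illustrative Gaussian dictionary with unit-norm columns (an assumption for the example).
D = rng.standard_normal((n, p))
D /= np.linalg.norm(D, axis=0)
blocks = [D[:, b*m:(b+1)*m] for b in range(num_blocks)]

# Spectral norm of the full dictionary.
spec_norm = np.linalg.norm(D, 2)

# Worst-case block coherence: largest spectral norm of an off-diagonal
# cross-Gram block, scaled by the block size (one common definition).
block_coh = max(
    np.linalg.norm(blocks[i].T @ blocks[j], 2)
    for i in range(num_blocks) for j in range(num_blocks) if i != j
) / m

# Conditioning of a random block subdictionary with k active blocks.
k = 8
S = rng.choice(num_blocks, size=k, replace=False)
D_S = np.hstack([blocks[b] for b in S])
sv = np.linalg.svd(D_S, compute_uv=False)

print(f"spectral norm = {spec_norm:.3f}, block coherence = {block_coh:.3f}")
print(f"random {k}-block subdictionary: sigma_max = {sv[0]:.3f}, sigma_min = {sv[-1]:.3f}")

Repeating the last step over many random draws of the block support gives an empirical picture of how often such block subdictionaries are well conditioned, which is the kind of question the first contribution addresses.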

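The recovery and regression results in the abstract concern convex programs that exploit block sparsity. As a generic, hedged illustration (a standard group-lasso solver, not the authors' specific estimator), the sketch below recovers a block-sparse coefficient vector from noisy underdetermined observations via proximal gradient descent with block soft-thresholding; the problem sizes, noise level, and regularization weight are assumptions chosen for the example.

import numpy as np

def block_soft_threshold(v, tau):
    # Prox of tau * ||.||_2: shrink the whole block toward zero.
    nrm = np.linalg.norm(v)
    return np.zeros_like(v) if nrm <= tau else (1.0 - tau / nrm) * v

def group_ista(D, y, m, lam, n_iter=1000):
    # Proximal gradient (ISTA) for 0.5*||y - D x||^2 + lam * sum_b ||x_b||_2,
    # with the columns of D partitioned into consecutive blocks of size m.
    p = D.shape[1]
    x = np.zeros(p)
    step = 1.0 / np.linalg.norm(D, 2) ** 2     # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        z = x - step * (D.T @ (D @ x - y))     # gradient step on the quadratic term
        for b in range(p // m):
            x[b*m:(b+1)*m] = block_soft_threshold(z[b*m:(b+1)*m], step * lam)
    return x

# Illustrative underdetermined block-sparse problem (all sizes are assumptions).
rng = np.random.default_rng(1)
n, num_blocks, m, k = 128, 64, 4, 8
p = num_blocks * m
D = rng.standard_normal((n, p))
D /= np.linalg.norm(D, axis=0)

x0 = np.zeros(p)
for b in rng.choice(num_blocks, size=k, replace=False):
    x0[b*m:(b+1)*m] = rng.standard_normal(m)   # k active blocks out of num_blocks
y = D @ x0 + 0.01 * rng.standard_normal(n)

x_hat = group_ista(D, y, m, lam=0.05)
print("relative recovery error:", np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))

Varying the dictionary in such a setup, and hence its spectral norm and coherences, is roughly the kind of experiment the third contribution reports.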