arXiv:1401.5833

Multiscale Dictionary Learning: Non-Asymptotic Bounds and Robustness

23 January 2014
Mauro Maggioni
Stanislav Minsker
Nate Strawn
Abstract

High-dimensional data sets often exhibit inherently low-dimensional structure. Over the past decade, this empirical fact has motivated researchers to study the detection, measurement, and exploitation of such low-dimensional structure, as well as its numerous implications for high-dimensional statistics, machine learning, and signal processing. Manifold learning (where the low-dimensional structure is a manifold) and dictionary learning (where the low-dimensional structure is the set of sparse linear combinations of vectors from a finite dictionary) are two prominent theoretical and computational frameworks in this area. Despite their ostensible distinction, the recently introduced Geometric Multi-Resolution Analysis (GMRA) provides a robust, computationally efficient, multiscale procedure for simultaneously learning a manifold and a dictionary. In this work, we prove non-asymptotic probabilistic bounds on the approximation error of GMRA for a rich class of underlying models that includes "noisy" manifolds, thus theoretically establishing the robustness of the procedure and confirming empirical observations. In particular, if the data concentrates near a low-dimensional manifold, our results show that the approximation error depends primarily on the intrinsic dimension of the manifold and is independent of the ambient dimension. Our work thus establishes GMRA as a provably fast algorithm for dictionary learning with approximation and sparsity guarantees. We perform numerical experiments that further confirm our theoretical results.
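To make the multiscale idea behind GMRA concrete, here is a minimal Python sketch, not the authors' implementation: it recursively partitions a point cloud with 2-means and fits a rank-d affine plane (local PCA) in each cell, so that the mean squared approximation error is governed by the intrinsic dimension d rather than the ambient dimension. The names gmra_like_error and local_affine_error are hypothetical, and numpy/scikit-learn are assumed.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def local_affine_error(X, d):
    # Squared error of projecting X onto its best-fit d-dimensional affine plane.
    if len(X) <= d:  # too few points: the plane fits exactly
        return 0.0
    pca = PCA(n_components=d).fit(X)
    X_hat = pca.inverse_transform(pca.transform(X))
    return np.sum((X - X_hat) ** 2)

def gmra_like_error(X, d, depth):
    # Mean squared approximation error of a depth-`depth` multiscale partition
    # (2-means splits) with a rank-d local PCA plane fitted in each cell.
    cells = [X]
    for _ in range(depth):
        next_cells = []
        for C in cells:
            if len(C) < 4:  # stop splitting tiny cells
                next_cells.append(C)
                continue
            labels = KMeans(n_clusters=2, n_init=10).fit_predict(C)
            next_cells += [C[labels == 0], C[labels == 1]]
        cells = next_cells
    return sum(local_affine_error(C, d) for C in cells) / len(X)

# A "noisy" manifold: a circle (intrinsic dimension 1) embedded in R^20
# with small ambient Gaussian noise.
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 2000)
X = np.zeros((2000, 20))
X[:, 0], X[:, 1] = np.cos(t), np.sin(t)
X += 0.01 * rng.standard_normal(X.shape)
for j in range(4):
    print(f"depth {j}: MSE = {gmra_like_error(X, d=1, depth=j):.5f}")

Running this sketch, the error decays as the partition is refined, even though the data live in 20 ambient dimensions; this decay with scale, down to the noise level, is the behavior the paper's non-asymptotic bounds quantify for GMRA.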
