arXiv:1805.00692

Compressed Dictionary Learning

2 May 2018
Karin Schnass
Flávio C. A. Teixeira
Abstract

In this paper we show that the computational complexity of the Iterative Thresholding and K-Residual-Means (ITKrM) algorithm for dictionary learning can be significantly reduced by using dimensionality-reduction techniques based on the Johnson-Lindenstrauss lemma. We introduce the Iterative Compressed-Thresholding and K-Means (IcTKM) algorithm for fast dictionary learning and study its convergence properties. We show that IcTKM can locally recover a generating dictionary with low computational complexity up to a target error $\tilde{\varepsilon}$ by compressing $d$-dimensional training data into $m < d$ dimensions, where $m$ is proportional to $\log d$ and inversely proportional to the distortion level $\delta$ incurred by compressing the data. Increasing the distortion level $\delta$ reduces the computational complexity of IcTKM at the cost of an increased recovery error and a reduced admissible sparsity level for the training data. For generating dictionaries comprised of $K$ atoms, we show that IcTKM can stably recover the dictionary with distortion levels up to the order $\delta \leq O(1/\sqrt{\log K})$. The compression effectively shatters the data-dimension bottleneck in the computational cost of the ITKrM algorithm. For training data with sparsity levels $S \leq O(K^{2/3})$, ITKrM can locally recover the dictionary with a computational cost that scales as $O(dK\log(\tilde{\varepsilon}^{-1}))$ per training signal. We show that for these same sparsity levels the computational cost can be brought down to $O(\log^5(d)\,K\log(\tilde{\varepsilon}^{-1}))$ with IcTKM, a significant reduction when high-dimensional data is considered. Our theoretical results are complemented with numerical simulations which demonstrate that IcTKM is a powerful, low-cost algorithm for learning dictionaries from high-dimensional data sets.
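
To make the compressed-thresholding idea concrete, the sketch below works through one IcTKM-style iteration in NumPy. It is a minimal illustration assembled from the abstract and the standard ITKrM update, not the authors' implementation: the function name `ictkm_step` is hypothetical, the JL embedding is a plain Gaussian matrix for readability (the quoted $O(\log^5(d)\,K\log(\tilde{\varepsilon}^{-1}))$ cost depends on fast JL constructions such as subsampled randomized Fourier or Hadamard transforms), and the exact update rule is assumed from the ITKrM literature.

```python
import numpy as np

def ictkm_step(Y, Phi, S, m, rng=None):
    """One IcTKM-style iteration (illustrative sketch, not the paper's code).

    Y   : (d, N) array of training signals.
    Phi : (d, K) current dictionary estimate with unit-norm columns.
    S   : sparsity level used for thresholding.
    m   : compressed dimension, m < d.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    d, N = Y.shape

    # Johnson-Lindenstrauss embedding. A dense Gaussian matrix is used here
    # for simplicity; the paper's speedup relies on fast JL constructions.
    Gamma = rng.standard_normal((m, d)) / np.sqrt(m)
    Yc, Phic = Gamma @ Y, Gamma @ Phi

    Phi_new = np.zeros_like(Phi)
    for n in range(N):
        # Compressed thresholding: estimate the support of signal n from
        # inner products computed in the m-dimensional compressed domain.
        ips_c = Phic.T @ Yc[:, n]
        I = np.argsort(np.abs(ips_c))[-S:]

        # K-residual-means update in the original d-dimensional domain:
        # project the signal onto the selected atoms, form the residual,
        # and add the signed residual plus the atom's own contribution
        # back to every selected atom.
        coeffs, *_ = np.linalg.lstsq(Phi[:, I], Y[:, n], rcond=None)
        residual = Y[:, n] - Phi[:, I] @ coeffs
        for k, ip in zip(I, Phi[:, I].T @ Y[:, n]):
            Phi_new[:, k] += np.sign(ip) * (residual + ip * Phi[:, k])

    # Renormalize the updated atoms (guarding against unused atoms).
    norms = np.linalg.norm(Phi_new, axis=0)
    return Phi_new / np.where(norms > 0, norms, 1.0)
```

In this sketch the compression only enters the support-estimation step: the $K$ inner products per signal are taken in $m$ dimensions instead of $d$, which is where the data-dimension bottleneck sits, while the residual-means averaging still runs in the original domain.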
