
arXiv:1905.12091
Approximate Guarantees for Dictionary Learning

28 May 2019
Aditya Bhaskara
W. Tai
Abstract

In the dictionary learning (or sparse coding) problem, we are given a collection of signals (vectors in ℝ^d), and the goal is to find a "basis" in which the signals have a sparse (approximate) representation. The problem has received a lot of attention in signal processing, learning, and theoretical computer science. The problem is formalized as factorizing a matrix X (d × n), whose columns are the signals, as X = AY, where A has a prescribed number m of columns (typically m ≪ n), and Y has columns that are k-sparse (typically k ≪ d). Most of the known theoretical results involve assuming that the columns of the unknown A have certain incoherence properties, and that the coefficient matrix Y has random (or partly random) structure. The goal of our work is to understand what can be said in the absence of such assumptions. Can we still find A and Y such that X ≈ AY? We show that this is possible, if we allow violating the bounds on m and k by appropriate factors that depend on k and the desired approximation. Our results rely on an algorithm for what we call the threshold correlation problem, which turns out to be related to hypercontractive norms of matrices. We also show that our algorithmic ideas apply to a setting in which some of the columns of X are outliers, thus giving similar guarantees even in this challenging setting.
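The factorization X ≈ AY with k-sparse columns of Y can be illustrated with off-the-shelf tooling. The sketch below uses scikit-learn's DictionaryLearning (a standard alternating minimization, not the algorithm of this paper); the dimensions d, n, m, k and the Gaussian data are arbitrary illustrative choices, and note that scikit-learn stores signals as rows, so shapes are transposed relative to the paper's d × n convention.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

d, n = 8, 50   # signal dimension, number of signals
m, k = 12, 3   # dictionary size m, per-signal sparsity k

rng = np.random.default_rng(0)
X = rng.standard_normal((n, d))  # signals as rows (the paper's X is this transposed)

dl = DictionaryLearning(
    n_components=m,               # m dictionary atoms (columns of A in the paper)
    transform_algorithm="omp",    # orthogonal matching pursuit for the sparse codes
    transform_n_nonzero_coefs=k,  # each code has at most k nonzeros
    random_state=0,
)
Y = dl.fit_transform(X)  # sparse coefficients, shape (n, m)
A = dl.components_       # learned dictionary, shape (m, d)

print("max nonzeros per code:", int(np.count_nonzero(Y, axis=1).max()))
print("relative error:", np.linalg.norm(X - Y @ A) / np.linalg.norm(X))
```

Here Y @ A plays the role of the paper's AY; the paper asks how good this approximation can be made, for arbitrary X, when m and k are allowed to grow by controlled factors.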
