Improved Generalization Bound and Learning of Sparsity Patterns for Data-Driven Low-Rank Approximation

17 September 2022
Shinsaku Sakaue
Taihei Oki
Abstract

Learning sketching matrices for fast and accurate low-rank approximation (LRA) has gained increasing attention. Recently, Bartlett, Indyk, and Wagner (COLT 2022) presented a generalization bound for learning-based LRA. Specifically, for rank-$k$ approximation using an $m \times n$ learned sketching matrix with $s$ non-zeros in each column, they proved an $\tilde{\mathrm{O}}(nsm)$ bound on the fat shattering dimension ($\tilde{\mathrm{O}}$ hides logarithmic factors). We build on their work and make two contributions. 1. We present a better $\tilde{\mathrm{O}}(nsk)$ bound ($k \le m$). En route to obtaining this result, we give a low-complexity Goldberg--Jerrum algorithm for computing pseudo-inverse matrices, which would be of independent interest. 2. We alleviate an assumption of the previous study that sketching matrices have a fixed sparsity pattern. We prove that learning positions of non-zeros increases the fat shattering dimension only by $\mathrm{O}(ns \log n)$. In addition, experiments confirm the practical benefit of learning sparsity patterns.
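
To make the setting concrete, here is a minimal sketch of the standard sketch-and-solve pipeline for LRA with a sparse $m \times n$ sketching matrix whose columns each have $s$ non-zeros. The random positions and signs below stand in for the learned values (and, per the second contribution, learned positions) studied in the paper; the function names and dimensions are illustrative, not taken from the paper.

```python
import numpy as np

def sparse_sketch(m, n, s, rng):
    """Build an m x n sketching matrix with s non-zeros per column
    (random positions and random signs, CountSketch-style).
    In the learned setting, the non-zero values -- and possibly their
    positions -- would instead be trained on data."""
    S = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=s, replace=False)
        S[rows, j] = rng.choice([-1.0, 1.0], size=s)
    return S

def sketched_lra(A, S, k):
    """Rank-k approximation of A from the sketch SA:
    project the rows of A onto the row space of SA, then truncate to rank k."""
    SA = S @ A                        # m x d sketch of the n x d matrix A
    _, _, Vt = np.linalg.svd(SA, full_matrices=False)
    P = A @ Vt.T @ Vt                 # projection onto row space of SA
    U, sig, Wt = np.linalg.svd(P, full_matrices=False)
    return (U[:, :k] * sig[:k]) @ Wt[:k, :]

rng = np.random.default_rng(0)
# Roughly low-rank test matrix (500 x 80).
A = rng.standard_normal((500, 200)) @ rng.standard_normal((200, 80))
S = sparse_sketch(m=20, n=500, s=2, rng=rng)
A_k = sketched_lra(A, S, k=10)
print(np.linalg.norm(A - A_k) / np.linalg.norm(A))  # relative approximation error
```

In the learning-based variant analyzed in the paper, the quality of this pipeline is a function of the sketching matrix, and the generalization bounds quantify how many training matrices are needed to learn a sketch that performs well on unseen inputs.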
