$k$-SVD with Gradient Descent

1 February 2025
Emily Gan
Yassir Jedra
Devavrat Shah
Abstract

We show that gradient descent with a simple, universal rule for step-size selection provably finds the $k$-SVD, i.e., the $k \geq 1$ largest singular values and corresponding singular vectors, of any matrix, despite nonconvexity. There has been substantial progress toward this in the past few years, with existing results establishing such guarantees for the \emph{exact-parameterized} and \emph{over-parameterized} settings, albeit with an oracle-provided step size. But guarantees for the generic setting, with a step-size selection that does not require oracle-provided information, have remained a challenge. We overcome this challenge and establish that gradient descent with an appealingly simple adaptive step size (akin to preconditioning) and random initialization enjoys global linear convergence in the generic setting. Our convergence analysis reveals that the gradient method has an attracting region, and within this attracting region, it behaves like Heron's method (a.k.a. the Babylonian method). Empirically, we validate the theoretical results. The emergence of modern compute infrastructure for iterative optimization, coupled with this work, is likely to provide the means to solve $k$-SVD for very large matrices.
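
As a rough illustration of the setting (and not the paper's algorithm), the sketch below runs gradient descent on the factorization objective $\tfrac12\|M - XY^\top\|_F^2$ with a step size that adapts to the spectral norms of $M$ and the current factors; both this objective and the specific step-size rule are assumptions made for illustration only, and the paper's universal rule differs. For intuition on the Heron's-method analogy: in the one-dimensional case of finding $\sqrt{a}$ by minimizing $\tfrac14(x^2-a)^2$, a gradient step preconditioned by $1/x^2$ with step size $\tfrac12$ reads $x \leftarrow \tfrac12(x + a/x)$, which is exactly Heron's iteration.

# Illustrative sketch only (assumed objective and step-size rule, not the
# paper's exact method): gradient descent on f(X, Y) = 0.5 * ||M - X Y^T||_F^2
# with a step size that adapts to the spectral norms of M and the factors.
import numpy as np

def k_svd_gd(M, k, iters=3000, seed=0):
    rng = np.random.default_rng(seed)
    m, n = M.shape
    sigma1 = np.linalg.norm(M, 2)                 # largest singular value of M
    # Random initialization, as in the abstract.
    X = rng.standard_normal((m, k)) / np.sqrt(m)
    Y = rng.standard_normal((n, k)) / np.sqrt(n)
    for _ in range(iters):
        R = M - X @ Y.T                           # residual
        grad_X, grad_Y = -R @ Y, -R.T @ X         # gradients of f w.r.t. X and Y
        # Adaptive step size (an illustrative choice, not the paper's rule):
        # shrink the step as the factors grow, to stay in a stable regime.
        eta = 0.25 / (sigma1 + np.linalg.norm(X, 2) ** 2 + np.linalg.norm(Y, 2) ** 2)
        X, Y = X - eta * grad_X, Y - eta * grad_Y
    # X @ Y.T should approximate the best rank-k approximation of M; read off
    # the top-k singular triplets with a final (small) SVD of the product.
    U, s, Vt = np.linalg.svd(X @ Y.T, full_matrices=False)
    return U[:, :k], s[:k], Vt[:k, :]

# Quick check against numpy's exact SVD on a random matrix.
M = np.random.default_rng(1).standard_normal((50, 30))
U, s, Vt = k_svd_gd(M, k=3)
print(np.round(s, 3))
print(np.round(np.linalg.svd(M, compute_uv=False)[:3], 3))

On a random $50 \times 30$ Gaussian matrix this approximately recovers the top three singular values; the final small SVD is only used to read the singular triplets off the converged product $XY^\top$.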

View on arXiv
@article{gan2025_2502.00320,
  title={$k$-SVD with Gradient Descent},
  author={Emily Gan and Yassir Jedra and Devavrat Shah},
  journal={arXiv preprint arXiv:2502.00320},
  year={2025}
}