Finding Low-Rank Solutions via Non-Convex Matrix Factorization, Efficiently and Provably

10 June 2016
Dohyung Park
Anastasios Kyrillidis
C. Caramanis
Sujay Sanghavi
Abstract

A rank-$r$ matrix $X \in \mathbb{R}^{m \times n}$ can be written as a product $UV^\top$, where $U \in \mathbb{R}^{m \times r}$ and $V \in \mathbb{R}^{n \times r}$. One could exploit this observation in optimization: e.g., consider the minimization of a convex function $f(X)$ over rank-$r$ matrices, where the set of rank-$r$ matrices is modeled via the factorization $UV^\top$. Though such parameterization reduces the number of variables, and is more computationally efficient (of particular interest is the case $r \ll \min\{m, n\}$), it comes at a cost: $f(UV^\top)$ becomes a non-convex function w.r.t. $U$ and $V$. We study such parameterization for optimization of generic convex objectives $f$, and focus on first-order, gradient descent algorithmic solutions. We propose the Bi-Factored Gradient Descent (BFGD) algorithm, an efficient first-order method that operates on the $U, V$ factors. We show that when $f$ is (restricted) smooth, BFGD has local sublinear convergence, and linear convergence when $f$ is both (restricted) smooth and (restricted) strongly convex. For several key applications, we provide simple and efficient initialization schemes that provide approximate solutions good enough for the above convergence results to hold.
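As a concrete illustration of the factored approach, consider $g(U, V) = f(UV^\top)$: the chain rule gives $\nabla_U g = \nabla f(UV^\top)\, V$ and $\nabla_V g = \nabla f(UV^\top)^\top U$. The sketch below applies plain gradient descent with these updates to the toy objective $f(X) = \tfrac{1}{2}\|X - M\|_F^2$. The fixed step size, the scaled random initialization, and the objective itself are illustrative assumptions; they are not the paper's BFGD step-size rule or its application-specific initialization schemes.

```python
import numpy as np

def factored_gd(grad_f, U0, V0, step, iters):
    """Gradient descent on g(U, V) = f(U V^T).

    grad_f(X) returns the gradient of the convex objective f at X.
    The constant step size is an assumption made for this sketch;
    the paper derives a specific, provably convergent step size.
    """
    U, V = U0.copy(), V0.copy()
    for _ in range(iters):
        G = grad_f(U @ V.T)  # gradient of f at the current X = U V^T
        # Simultaneous chain-rule updates of both factors.
        U, V = U - step * (G @ V), V - step * (G.T @ U)
    return U, V

# Toy usage: low-rank approximation, f(X) = 0.5 * ||X - M||_F^2,
# whose gradient is simply grad_f(X) = X - M.
rng = np.random.default_rng(0)
m, n, r = 50, 40, 3
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-r target
U0 = 0.1 * rng.standard_normal((m, r))  # small random init (assumption)
V0 = 0.1 * rng.standard_normal((n, r))
U, V = factored_gd(lambda X: X - M, U0, V0, step=0.01, iters=2000)
print(np.linalg.norm(U @ V.T - M) / np.linalg.norm(M))  # relative error
```

Each iteration only requires matrix products involving the thin $m \times r$ and $n \times r$ factors, avoiding the rank-$r$ projections (truncated SVDs) that projected methods on the full variable $X$ would need at every step.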
