A Convergent Gradient Descent Algorithm for Rank Minimization and Semidefinite Programming from Random Linear Measurements

Abstract
We propose a simple, scalable, and fast gradient descent algorithm to optimize a nonconvex objective for the rank minimization problem and a closely related family of semidefinite programs. With $O(r^3 \kappa^2 n \log n)$ random measurements of a positive semidefinite $n \times n$ matrix of rank $r$ and condition number $\kappa$, our method is guaranteed to converge linearly to the global optimum.
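The paper's algorithm and guarantees are not reproduced here; as a rough, self-contained illustration of the setting, the sketch below runs plain gradient descent on the factored least-squares objective $f(U) = \frac{1}{4m}\sum_i (\langle A_i, UU^\top\rangle - b_i)^2$ with a standard spectral initialization. The problem sizes ($n$, $r$, $m$), step size, and iteration count are arbitrary choices for the demo, not values taken from the paper.

```python
import numpy as np

# Illustrative sketch only: plain gradient descent on the factored objective
#   f(U) = (1/4m) * sum_i ( <A_i, U U^T> - b_i )^2
# for recovering a rank-r PSD matrix from random linear measurements.
# Problem sizes, step size, and iteration count are arbitrary assumptions.

n, r, m = 40, 2, 3000
rng = np.random.default_rng(0)

# Ground-truth rank-r PSD matrix X* = U* U*^T and Gaussian measurements.
U_star = rng.standard_normal((n, r))
X_star = U_star @ U_star.T
A = rng.standard_normal((m, n, n))             # measurement matrices A_i
b = np.einsum('kij,ij->k', A, X_star)          # b_i = <A_i, X*>

def gradient(U):
    """Gradient of f(U) with respect to the factor U."""
    resid = np.einsum('kij,ij->k', A, U @ U.T) - b     # residuals, shape (m,)
    G = np.einsum('k,kij->ij', resid, A) / m           # (1/m) sum_i resid_i A_i
    return ((G + G.T) / 2) @ U                         # sym(G) U

# Spectral initialization: top-r eigenpairs of sym((1/m) sum_i b_i A_i).
M = np.einsum('k,kij->ij', b, A) / m
M = (M + M.T) / 2
vals, vecs = np.linalg.eigh(M)
U = vecs[:, -r:] * np.sqrt(np.maximum(vals[-r:], 0.0))

# Constant step size chosen heuristically from the estimated top eigenvalue;
# the paper prescribes its own step size, which is not reproduced here.
step = 0.1 / np.linalg.norm(U, 2) ** 2
for _ in range(1500):
    U -= step * gradient(U)

print("relative error:",
      np.linalg.norm(U @ U.T - X_star) / np.linalg.norm(X_star))
```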