Many problems in machine learning can be reduced to learning a low-rank positive semidefinite matrix $X$, which leads to a semidefinite program (SDP). Existing SDP solvers based on classical convex optimization are too expensive for large-scale problems. Exploiting the low rank of the solution, Burer-Monteiro's method reformulates the SDP as a nonconvex problem via the factorization $X = VV^{\top}$. However, this factorization loses the structure of the problem during optimization. In this paper, we propose to convert the SDP into a biconvex problem via the factorization $X = UV^{\top}$, while adding the term $\frac{\rho}{2}\|U - V\|_F^2$ to penalize the difference between $U$ and $V$. The biconvex structure (w.r.t. $U$ and $V$) can thus be exploited naturally in optimization. As a theoretical result, we provide a bound on the penalty parameter $\rho$ under the assumptions of $L$-Lipschitz smoothness and $\sigma$-strong biconvexity, such that, at stationary points, the proposed bilinear factorization is equivalent to Burer-Monteiro's factorization once the bound is attained, that is, $U = V$. Our proposal opens up a new way to surrogate an SDP with a biconvex program. Experiments on two SDP-related applications demonstrate that the proposed method is as effective as the state of the art.
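As a minimal illustration (not the authors' implementation), the penalized bilinear factorization can be optimized by alternating convex updates, since the objective is convex in each factor with the other fixed. Here a toy quadratic loss $f(X) = \frac{1}{2}\|X - M\|_F^2$ stands in for a generic smooth SDP objective; the target $M$, the rank $r$, the penalty weight `rho`, and the closed-form alternating least-squares updates are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, rho = 20, 3, 10.0

# Ground-truth low-rank PSD target M = W W^T (illustrative data).
W = rng.standard_normal((n, r))
M = W @ W.T

# Random initialization of the two factors in X = U V^T.
U = rng.standard_normal((n, r))
V = rng.standard_normal((n, r))

# Alternating minimization: with the other factor fixed, each subproblem
# is a convex ridge-regularized least-squares problem, solved in closed form.
for _ in range(1000):
    # U-step: argmin_U 0.5*||U V^T - M||_F^2 + 0.5*rho*||U - V||_F^2
    U = (M @ V + rho * V) @ np.linalg.inv(V.T @ V + rho * np.eye(r))
    # V-step: argmin_V 0.5*||U V^T - M||_F^2 + 0.5*rho*||U - V||_F^2
    V = (M.T @ U + rho * U) @ np.linalg.inv(U.T @ U + rho * np.eye(r))

# With rho large enough, stationary points satisfy U = V, recovering
# the Burer-Monteiro factorization X = V V^T.
gap = np.linalg.norm(U - V) / np.linalg.norm(U)
err = np.linalg.norm(U @ V.T - M) / np.linalg.norm(M)
print(f"relative gap ||U-V||/||U|| = {gap:.2e}, fit error = {err:.2e}")
```

In this sketch the relative gap between $U$ and $V$ shrinks toward zero as the iterates converge, matching the claimed equivalence with the symmetric factorization at stationarity.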