Even Faster SVD Decomposition Yet Without Agonizing Pain

Neural Information Processing Systems (NeurIPS), 2016
Abstract

We study k-SVD, the problem of approximately computing the first k singular vectors of a matrix A. Recently, a few breakthroughs have been made on k-SVD: Musco and Musco [1] provided the first gap-free theorem for the block Krylov method, Shamir [2] discovered the first variance-reduction stochastic method, and Bhojanapalli et al. [3] provided the fastest algorithm of the O(nnz(A) + poly(1/ε)) type, using alternating minimization. In this paper, we put forward a new framework for SVD and improve the above breakthroughs. We obtain a faster gap-free convergence rate, outperforming [1], and the first accelerated AND stochastic method, outperforming [2]. In the O(nnz(A) + poly(1/ε)) running-time regime, we outperform [3] in certain parameter regimes, without even using alternating minimization.
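For intuition about the k-SVD problem the abstract refers to, below is a minimal sketch of the randomized block Krylov method in the spirit of Musco and Musco [1]. This is an illustrative baseline, not the algorithm proposed in this paper; the function name, the choice of q iterations, and the test matrix are all assumptions made for the example.

```python
import numpy as np

def block_krylov_svd(A, k, q=5, seed=0):
    """Approximate top-k SVD via a randomized block Krylov subspace
    (illustrative sketch in the spirit of Musco & Musco [1])."""
    n, d = A.shape
    rng = np.random.default_rng(seed)
    Pi = rng.standard_normal((d, k))      # random starting block
    block = A @ Pi                        # n x k
    blocks = [block]
    for _ in range(q):
        block = A @ (A.T @ block)         # one more application of A A^T
        blocks.append(block)
    K = np.hstack(blocks)                 # Krylov matrix, n x k(q+1)
    Q, _ = np.linalg.qr(K)                # orthonormal basis of the subspace
    # Rayleigh-Ritz: exact SVD of the small projected matrix Q^T A
    Ub, S, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return Q @ Ub[:, :k], S[:k], Vt[:k, :]

# Usage: recover the top-2 singular values of a synthetic low-rank matrix
rng = np.random.default_rng(1)
U0, _ = np.linalg.qr(rng.standard_normal((200, 6)))
V0, _ = np.linalg.qr(rng.standard_normal((100, 6)))
s = np.array([10.0, 5.0, 1.0, 0.5, 0.1, 0.05])
A = U0 @ np.diag(s) @ V0.T
U, S, Vt = block_krylov_svd(A, k=2)
print(np.allclose(S, s[:2], atol=1e-6))
```

The fast methods discussed in the abstract (stochastic, variance-reduced, and nnz(A)-time algorithms) replace the dense matrix products above with cheaper stochastic or sparse updates.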
