Over-parametrization is an important technique in training neural networks. In both theory and practice, training a larger network allows the optimization algorithm to avoid bad local optima. In this paper we study a closely related tensor decomposition problem: given an $\ell$-th order tensor in $(\mathbb{R}^d)^{\otimes \ell}$ of rank $r$ (where $r \ll d$), can variants of gradient descent find a rank-$m$ decomposition where $m > r$? We show that in a lazy training regime (similar to the NTK regime for neural networks) one needs at least $m = \Omega(d^{\ell-1})$ components, while a variant of gradient descent can find an approximate tensor when $m = O^*(r^{2.5\ell})$. Our results show that gradient descent on an over-parametrized objective can go beyond the lazy training regime and utilize the low-rank structure in the data.
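
To make the setup concrete, here is a minimal sketch of the over-parametrized objective, assuming a symmetric third-order tensor ($\ell = 3$), illustrative values of $d$, $r$, $m$, and plain gradient descent; this is not the paper's analyzed algorithm, which is a particular variant of gradient descent.

```python
# A minimal sketch, assuming a symmetric third-order tensor (l = 3) and
# vanilla gradient descent; the paper analyzes a modified variant, so this
# only illustrates the over-parametrized reconstruction objective.
import jax
import jax.numpy as jnp

d, r, m = 20, 3, 30  # ambient dimension, true rank, over-parametrized rank (m > r)

key = jax.random.PRNGKey(0)
key, sub = jax.random.split(key)

# Ground-truth rank-r tensor T = sum_i a_i (x) a_i (x) a_i with unit-norm components.
A_true = jax.random.normal(sub, (r, d))
A_true = A_true / jnp.linalg.norm(A_true, axis=1, keepdims=True)
T = jnp.einsum('ia,ib,ic->abc', A_true, A_true, A_true)

def loss(A):
    """Squared Frobenius error of the rank-m symmetric CP approximation."""
    T_hat = jnp.einsum('ia,ib,ic->abc', A, A, A)
    return jnp.sum((T_hat - T) ** 2)

# Over-parametrized components (m > r), small random initialization.
key, sub = jax.random.split(key)
A = 0.1 * jax.random.normal(sub, (m, d))

grad_loss = jax.jit(jax.grad(loss))
lr, steps = 2e-2, 5000
for _ in range(steps):
    A = A - lr * grad_loss(A)

print('final reconstruction loss:', float(loss(A)))
```

Varying $m$ relative to $r$ in this sketch is one way to probe empirically how over-parametrization affects the optimization; the paper's guarantees, however, apply to its modified gradient descent variant rather than the vanilla loop above.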