Gradient Descent for Deep Matrix Factorization: Dynamics and Implicit Bias towards Low Rank

Many deep learning scenarios use more network parameters than training examples. In such situations, several networks can often be found that exactly interpolate the data, which means that the learning algorithm induces an implicit bias on the chosen network. This paper aims to shed some light on the nature of this implicit bias in a simplified setting of linear networks, i.e., deep matrix factorizations. We provide a rigorous analysis of the dynamics of vanilla gradient descent. We characterize the dynamical behaviour of the ground-truth eigenvectors and the convergence of the corresponding eigenvalues to the true ones. As a consequence, for exactly characterized time intervals, the effective rank of the gradient descent iterates is provably close to the effective rank of a low-rank projection of the ground-truth matrix, so that early stopping of gradient descent produces regularized solutions that may be used, for instance, for denoising. In particular, apart from a few initial iterations, the effective rank of our matrix is monotonically increasing, suggesting that "matrix factorization implicitly enforces gradient descent to take a route in which the effective rank is monotone". Since empirical observations in more general scenarios such as matrix sensing show a similar phenomenon, we believe that our theoretical results help in understanding the still mysterious "implicit bias" of gradient descent in deep learning.
View on arXiv
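The phenomenon sketched in the abstract can be illustrated numerically. The following is a minimal, hedged sketch (not the paper's experimental setup): vanilla gradient descent on a depth-3 matrix factorization fitting a noisy low-rank matrix, while tracking an entropy-based effective rank of the iterates. All dimensions, the learning rate, the initialization scale, and the particular effective-rank measure are illustrative assumptions, not values from the paper.

```python
import numpy as np

def effective_rank(A, eps=1e-12):
    """Entropy-based effective rank of the singular-value distribution of A."""
    s = np.linalg.svd(A, compute_uv=False)
    p = s / (s.sum() + eps)
    p = p[p > eps]
    return float(np.exp(-(p * np.log(p)).sum()))

rng = np.random.default_rng(0)
d, true_rank, depth, lr = 10, 2, 3, 0.005

# Noisy ground truth: low-rank PSD signal plus small Gaussian noise.
U = rng.standard_normal((d, true_rank))
M = U @ U.T + 0.01 * rng.standard_normal((d, d))

def chain(mats):
    """Product mats[-1] @ ... @ mats[0]; identity for the empty list."""
    out = np.eye(d)
    for A in mats:
        out = A @ out
    return out

# Deep factorization W = W_depth ... W_1 with small identity initialization,
# so the end-to-end product starts close to zero.
Ws = [0.1 * np.eye(d) for _ in range(depth)]

ranks = []
for step in range(3000):
    W = chain(Ws)
    R = W - M  # gradient of the loss 0.5 * ||W - M||_F^2 with respect to W
    # Gradient with respect to factor W_j: (W_depth...W_{j+1})^T R (W_{j-1}...W_1)^T.
    grads = [chain(Ws[j + 1:]).T @ R @ chain(Ws[:j]).T for j in range(depth)]
    for j in range(depth):
        Ws[j] -= lr * grads[j]
    ranks.append(effective_rank(W))
```

In runs of this kind, the effective rank of the iterates drops quickly from its initial value (the identity-scaled initialization has a flat spectrum), then increases roughly monotonically toward the effective rank of the low-rank signal, while the small noise directions are learned only much later; stopping early therefore yields an approximately low-rank, denoised iterate.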