Several recent empirical studies demonstrate that important machine learning tasks, e.g., training deep neural networks, exhibit low-rank structure, where the loss function varies significantly in only a few directions of the input space. In this paper, we leverage such low-rank structure to reduce the high computational cost of canonical gradient-based methods such as gradient descent (GD). Our proposed \emph{Low-Rank Gradient Descent} (LRGD) algorithm finds an $\epsilon$-approximate stationary point of a $p$-dimensional function by first identifying $r \leq p$ significant directions, and then estimating the true $p$-dimensional gradient at every iteration by computing directional derivatives only along those $r$ directions. We establish that the "directional oracle complexities" of LRGD for strongly convex and non-convex objective functions are $\mathcal{O}(r \log(1/\epsilon) + rp)$ and $\mathcal{O}(r/\epsilon^2 + rp)$, respectively. When $r \ll p$, these complexities are smaller than the known complexities of $\mathcal{O}(p \log(1/\epsilon))$ and $\mathcal{O}(p/\epsilon^2)$ of GD in the strongly convex and non-convex settings, respectively. Thus, LRGD significantly reduces the computational cost of gradient-based methods for sufficiently low-rank functions. In the course of our analysis, we also formally define and characterize the classes of exact and approximately low-rank functions.
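To make the two-phase idea above concrete, here is a minimal Python sketch of a low-rank gradient descent loop: a subspace of $r$ significant directions is estimated first, and each subsequent step uses only $r$ directional derivatives. The probe-point scheme, the finite-difference stand-in for a directional-derivative oracle, the helper names (`lrgd`, `directional_derivative`), and the step size are illustrative assumptions on our part, not the paper's exact specification.

\begin{verbatim}
import numpy as np

def directional_derivative(f, x, u, h=1e-6):
    """Finite-difference stand-in for a directional-derivative oracle."""
    return (f(x + h * u) - f(x - h * u)) / (2.0 * h)

def lrgd(f, x0, r, step=0.1, iters=200, n_probe=None, h=1e-6, rng=None):
    rng = np.random.default_rng(rng)
    p = x0.size
    n_probe = n_probe or r  # a few probe points to estimate the subspace

    # Phase 1: estimate r significant directions from full gradients at probe
    # points (each full finite-difference gradient costs ~p directional
    # derivatives, matching an additive r*p-type setup cost).
    G = np.empty((p, n_probe))
    for j in range(n_probe):
        z = x0 + rng.standard_normal(p)
        G[:, j] = [directional_derivative(f, z, e, h) for e in np.eye(p)]
    U, _, _ = np.linalg.svd(G, full_matrices=False)
    U = U[:, :r]  # orthonormal basis of the estimated low-rank subspace

    # Phase 2: descent using only r directional derivatives per iteration.
    x = x0.astype(float).copy()
    for _ in range(iters):
        coeffs = np.array(
            [directional_derivative(f, x, U[:, i], h) for i in range(r)]
        )
        x -= step * (U @ coeffs)  # approximate gradient = U U^T grad f(x)
    return x

# Example: a quadratic in 50 dimensions that varies along only 2 directions.
if __name__ == "__main__":
    A = np.zeros((50, 50)); A[0, 0] = A[1, 1] = 1.0
    f = lambda x: 0.5 * x @ A @ x
    x_final = lrgd(f, x0=np.ones(50), r=2)
    print(np.round(x_final[:3], 4))  # first two coordinates driven toward 0
\end{verbatim}

In this toy example the loss is exactly rank 2, so after the one-time subspace-identification cost each iteration needs only 2 directional derivatives instead of 50, which is the source of the savings when $r \ll p$.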