In this note we give a simple proof for the convergence of stochastic gradient descent (SGD) methods on $\mu$-strongly convex functions under a (milder than standard) $L$-smoothness assumption. We show that SGD converges after $T$ iterations as $\mathcal{O}\bigl(L\|x_0 - x^\star\|^2 \exp\bigl[-\tfrac{\mu}{4L}T\bigr] + \tfrac{\sigma^2}{\mu T}\bigr)$, where $\sigma^2$ measures the variance of the stochastic gradients. For deterministic gradient descent (GD) and SGD in the interpolation setting we have $\sigma^2 = 0$ and we recover the exponential convergence rate. The bound matches the best known iteration complexity of GD and SGD, up to constants.
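To make the two regimes of the bound concrete, here is a minimal numerical sketch (illustrative code, not taken from the note): SGD on a diagonal quadratic that is $\mu$-strongly convex and $L$-smooth, run with and without gradient noise. The function name `run_sgd` and the constant step size $1/L$ are assumptions made for this sketch; the note's rate relies on carefully chosen step sizes.

```python
import numpy as np

# Illustrative sketch (assumed setup, not the note's exact setting or
# step-size schedule): SGD on the quadratic f(x) = 0.5 * x^T A x, with A
# diagonal and eigenvalues spread over [mu, L], so f is mu-strongly convex
# and L-smooth. The stochastic gradient adds Gaussian noise with
# per-coordinate variance sigma^2.

def run_sgd(mu=0.1, L=1.0, sigma=0.0, T=2000, d=10, seed=0):
    rng = np.random.default_rng(seed)
    eigs = np.linspace(mu, L, d)        # spectrum of the diagonal A
    x = rng.standard_normal(d)          # x_0; the minimizer is x* = 0
    dist0 = float(np.dot(x, x))         # ||x_0 - x*||^2
    step = 1.0 / L                      # simple constant step size (assumed)
    errors = []
    for _ in range(T):
        grad = eigs * x                 # exact gradient A x
        noise = sigma * rng.standard_normal(d)
        x = x - step * (grad + noise)   # SGD step with noisy gradient
        errors.append(float(np.dot(x, x)) / dist0)
    return errors

if __name__ == "__main__":
    # sigma = 0 (GD / interpolation): the error decays exponentially fast.
    # sigma > 0: with this constant step the error plateaus at a noise
    # floor; reaching the O(sigma^2 / (mu * T)) term requires a suitably
    # decreasing step-size schedule, as analyzed in the note.
    for sigma in (0.0, 1.0):
        err = run_sgd(sigma=sigma)
        print(f"sigma = {sigma:.1f}: final relative error {err[-1]:.2e}")
```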