
Accelerating Stochastic Training for Over-parametrized Learning

31 October 2018
Chaoyue Liu, M. Belkin
arXiv: 1810.13395
Abstract

We introduce MaSS (Momentum-added Stochastic Solver), an accelerated SGD method for optimizing over-parametrized models. Our method is simple and efficient to implement and does not require adapting hyper-parameters or computing full gradients in the course of optimization. Experimental evaluation of MaSS on several standard deep network architectures, including ResNet and convolutional networks, shows improved performance over Adam and SGD in both optimization and generalization. We prove accelerated convergence of MaSS over SGD and provide an analysis of hyper-parameter selection in the quadratic case, as well as some results in the general strongly convex setting. In contrast, we show theoretically and verify empirically that standard SGD+Nesterov can diverge for common choices of hyper-parameter values. We also analyze the practically important question of how the convergence rate and optimal hyper-parameters depend on the mini-batch size, demonstrating three distinct regimes: linear scaling, diminishing returns, and saturation.
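The abstract does not spell out the update rule, so the following is only a minimal, illustrative sketch of what a momentum-accelerated stochastic step with an added compensating gradient term can look like on a toy over-parametrized least-squares problem. The toy objective, the hyper-parameter values, and the names stochastic_grad, eta1, eta2, and gamma are assumptions made for illustration, not the paper's tuned settings or reference implementation; see the paper for the actual MaSS update and its analysis.

import numpy as np

# Illustrative sketch only: a Nesterov-style stochastic step with an extra
# compensation term, run on a toy over-parametrized least-squares problem.
rng = np.random.default_rng(0)

n_samples, n_params = 20, 50                 # more parameters than samples
X = rng.standard_normal((n_samples, n_params))
y = rng.standard_normal(n_samples)

def stochastic_grad(w, batch_size=5):
    # Unbiased mini-batch estimate of the gradient of 0.5 * mean((Xw - y)^2).
    idx = rng.choice(n_samples, size=batch_size, replace=False)
    Xb, yb = X[idx], y[idx]
    return Xb.T @ (Xb @ w - yb) / batch_size

eta1, eta2, gamma = 0.005, 0.001, 0.8        # illustrative values, not the paper's

w = np.zeros(n_params)                       # iterate sequence
u = w.copy()                                 # auxiliary (momentum) sequence

for step in range(2000):
    g = stochastic_grad(u)
    w_prev, w = w, u - eta1 * g              # gradient step taken from u
    u = w + gamma * (w - w_prev) + eta2 * g  # momentum plus compensation term

print("final training loss:", 0.5 * np.mean((X @ w - y) ** 2))

Without the eta2 term this reduces to a stochastic Nesterov-style update, which, as the abstract notes, can diverge for common hyper-parameter choices; the sketch only illustrates where such a compensating term enters the iteration.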
