Momentum Further Constrains Sharpness at the Edge of Stochastic Stability

Arseniy Andreyev
Advikar Ananthkumar
Marc Walden
Tomaso Poggio
Pierfrancesco Beneventano
Main: 9 pages
Figures: 40
Bibliography: 3 pages
Appendix: 28 pages
Abstract

Recent work suggests that (stochastic) gradient descent self-organizes near an instability boundary, shaping both optimization and the solutions found. Momentum and mini-batch gradients are widely used in practical deep learning optimization, but it remains unclear whether their combination operates in a comparable regime of instability. We demonstrate that SGD with momentum exhibits an Edge of Stochastic Stability (EoSS)-like regime with batch-size-dependent behavior that cannot be explained by a single momentum-adjusted stability threshold. Batch Sharpness (the expected directional mini-batch curvature) stabilizes in two distinct regimes: at small batch sizes it converges to a lower plateau $2(1-\beta)/\eta$ (with momentum parameter $\beta$ and learning rate $\eta$), reflecting the amplification of stochastic fluctuations by momentum and favoring flatter regions than vanilla SGD; at large batch sizes it converges to a higher plateau $2(1+\beta)/\eta$, where momentum recovers its classical stabilizing effect and favors sharper regions, consistent with full-batch dynamics. We further show that this behavior aligns with linear stability thresholds and discuss the implications for hyperparameter tuning and coupling.
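The large-batch plateau quoted in the abstract can be checked against the classical linear-stability analysis of heavy-ball momentum on a quadratic. The sketch below is illustrative and not from the paper's code: for $f(x) = \lambda x^2 / 2$, the heavy-ball iteration is stable exactly when $\lambda < 2(1+\beta)/\eta$, which the spectral radius of the iteration matrix confirms numerically (the chosen $\eta$, $\beta$, and test curvatures are assumptions for the demo).

```python
# Minimal sketch: linear stability of heavy-ball momentum on f(x) = lam * x^2 / 2.
# The update v <- beta*v - eta*lam*x; x <- x + v is stable iff the spectral
# radius of the 2x2 iteration matrix A is < 1, which holds iff lam < 2*(1+beta)/eta.
import numpy as np

def spectral_radius(eta: float, beta: float, lam: float) -> float:
    """Spectral radius of the heavy-ball iteration matrix on a 1D quadratic."""
    A = np.array([[1.0 - eta * lam, beta],
                  [-eta * lam,      beta]])
    return max(abs(np.linalg.eigvals(A)))

eta, beta = 0.01, 0.9
threshold = 2 * (1 + beta) / eta  # the full-batch plateau 2(1+beta)/eta

# Curvatures just below and just above the threshold flip stability.
for lam in (0.99 * threshold, 1.01 * threshold):
    rho = spectral_radius(eta, beta, lam)
    print(f"lam/threshold = {lam / threshold:.2f}: "
          f"spectral radius = {rho:.4f} "
          f"({'stable' if rho < 1 else 'unstable'})")

# Note: the small-batch plateau 2*(1 - beta)/eta in the abstract is a
# *stochastic* stability effect (momentum amplifying mini-batch gradient
# noise), which this deterministic eigenvalue check does not capture.
```

Running this prints a spectral radius below 1 for a curvature just under the threshold and above 1 just over it, matching the deterministic boundary that the paper's large-batch regime recovers.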
