
On the Interaction of Noise, Compression Role, and Adaptivity under $(L_0, L_1)$-Smoothness: An SDE-based Approach

Main: 5 pages
Figures: 1
Bibliography: 3 pages
Appendix: 10 pages
Abstract

Using stochastic differential equation (SDE) approximations, we study the dynamics of Distributed SGD, Distributed Compressed SGD, and Distributed SignSGD under $(L_0, L_1)$-smoothness and flexible noise assumptions. Our analysis provides insights, which we validate through simulation, into the intricate interaction between batch noise, stochastic gradient compression, and adaptivity in this modern theoretical setting. For instance, we show that adaptive methods such as Distributed SignSGD can converge under standard assumptions on the learning-rate schedule, even under heavy-tailed noise. In contrast, Distributed (Compressed) SGD with a pre-scheduled decaying learning rate fails to converge unless the schedule also accounts for an inverse dependency on the gradient norm, de facto turning it into an adaptive method.
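To illustrate the kind of behavior the abstract describes, the following is a minimal sketch (not the paper's algorithm, analysis, or experiments) of Distributed SignSGD with majority-vote aggregation on a one-dimensional quadratic, where each worker observes the gradient corrupted by symmetric heavy-tailed (Pareto-type) noise. All function names, constants, and the noise model are illustrative assumptions; SignSGD's update depends only on the sign of the aggregated vote, which is why heavy tails do not destabilize it here.

```python
import random

def grad(x):
    # true gradient of the toy objective f(x) = x^2 / 2
    return x

def heavy_tail_noise(alpha=1.5):
    # symmetric heavy-tailed noise with infinite variance for alpha <= 2
    # (illustrative choice: shifted Pareto sample with a random sign)
    s = random.paretovariate(alpha) - 1.0
    return s if random.random() < 0.5 else -s

def distributed_signsgd(x0, workers=8, steps=2000, lr=0.2):
    # each worker transmits only the sign of its stochastic gradient;
    # the server aggregates by majority vote and takes a signed step
    # with a standard 1/sqrt(t) decaying learning-rate schedule
    x = x0
    for t in range(1, steps + 1):
        votes = sum(
            1 if grad(x) + heavy_tail_noise() > 0 else -1
            for _ in range(workers)
        )
        step = 1 if votes > 0 else (-1 if votes < 0 else 0)
        x -= (lr / t ** 0.5) * step
    return x

random.seed(0)
x_final = distributed_signsgd(5.0)
print(x_final)
```

Despite the noise having infinite variance, the iterate is driven into a small neighborhood of the minimizer, because the sign/majority-vote compression only needs the noise median, not its variance, to point the update in the right direction on average.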

@article{compagnoni2025_2506.00181,
  title={On the Interaction of Noise, Compression Role, and Adaptivity under $(L_0, L_1)$-Smoothness: An SDE-based Approach},
  author={Enea Monzio Compagnoni and Rustem Islamov and Antonio Orvieto and Eduard Gorbunov},
  journal={arXiv preprint arXiv:2506.00181},
  year={2025}
}