
Emergence and scaling laws in SGD learning of shallow neural networks

Abstract

We study the complexity of online stochastic gradient descent (SGD) for learning a two-layer neural network with $P$ neurons on isotropic Gaussian data: $f_*(\boldsymbol{x}) = \sum_{p=1}^P a_p\,\sigma(\langle\boldsymbol{x},\boldsymbol{v}_p^*\rangle)$, $\boldsymbol{x} \sim \mathcal{N}(0,\boldsymbol{I}_d)$, where the activation $\sigma:\mathbb{R}\to\mathbb{R}$ is an even function with information exponent $k_*>2$ (defined as the lowest degree in its Hermite expansion), $\{\boldsymbol{v}^*_p\}_{p\in[P]}\subset \mathbb{R}^d$ are orthonormal signal directions, and the non-negative second-layer coefficients satisfy $\sum_{p} a_p^2=1$. We focus on the challenging ``extensive-width'' regime $P\gg 1$ and permit a diverging condition number in the second layer, covering as a special case the power-law scaling $a_p\asymp p^{-\beta}$, where $\beta\in\mathbb{R}_{\ge 0}$. We provide a precise analysis of the SGD dynamics for training a student two-layer network to minimize the mean squared error (MSE) objective, and explicitly identify sharp transition times at which each signal direction is recovered. In the power-law setting, we characterize the scaling-law exponents of the MSE loss with respect to the number of training samples and SGD steps, as well as the number of parameters in the student network. Our analysis shows that while the learning of individual teacher neurons exhibits abrupt transitions, the juxtaposition of $P\gg 1$ emergent learning curves at different timescales gives rise to a smooth scaling law for the cumulative objective.
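To make the setting concrete, below is a minimal illustrative sketch (not the authors' algorithm or analysis) of the teacher model and plain online SGD on the MSE objective. The even activation (Hermite $He_4$, information exponent $k_*=4$), the student width, step size, and initialization scales are all assumptions chosen for illustration; the paper's results concern the precise dynamics and timescales, which this toy loop does not reproduce.

```python
# Illustrative sketch only: teacher with orthonormal directions and power-law
# second-layer coefficients a_p ∝ p^{-beta} (normalized so sum a_p^2 = 1),
# and a student two-layer network trained by online SGD on the squared loss.
# Activation, widths, and learning rate are assumptions, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

d, P, beta = 64, 8, 1.0           # input dimension, teacher width, power-law exponent
m, lr, steps = 256, 5e-4, 20_000  # student width, step size, number of online SGD steps


def sigma(z):
    # Even activation with information exponent k_* = 4 (Hermite He_4);
    # chosen purely for illustration.
    return z**4 - 6 * z**2 + 3


def d_sigma(z):
    return 4 * z**3 - 12 * z


# Teacher: orthonormal signal directions (rows of an orthogonal matrix) and
# normalized power-law second-layer coefficients.
V = np.linalg.qr(rng.standard_normal((d, d)))[0][:P]   # (P, d), orthonormal rows
a = np.arange(1, P + 1, dtype=float) ** (-beta)
a /= np.linalg.norm(a)                                  # enforce sum_p a_p^2 = 1


def teacher(x):
    return a @ sigma(V @ x)


# Student: f(x) = sum_j c_j * sigma(<x, w_j>); both layers updated by plain
# online SGD, one fresh Gaussian sample per step.
W = rng.standard_normal((m, d)) / np.sqrt(d)
c = rng.standard_normal(m) / m

for t in range(steps):
    x = rng.standard_normal(d)                # fresh sample -> online SGD
    pre = W @ x                               # pre-activations, shape (m,)
    act = sigma(pre)
    err = c @ act - teacher(x)                # residual on this sample
    grad_c = err * act
    grad_W = err * (c * d_sigma(pre))[:, None] * x[None, :]
    c -= lr * grad_c
    W -= lr * grad_W

# Rough progress check: squared error on a held-out Gaussian batch.
X_test = rng.standard_normal((2048, d))
mse = np.mean((sigma(X_test @ W.T) @ c - sigma(X_test @ V.T) @ a) ** 2)
print(f"held-out MSE after {steps} online steps: {mse:.4f}")
```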

@article{ren2025_2504.19983,
  title={Emergence and scaling laws in SGD learning of shallow neural networks},
  author={Yunwei Ren and Eshaan Nichani and Denny Wu and Jason D. Lee},
  journal={arXiv preprint arXiv:2504.19983},
  year={2025}
}