Emergence and scaling laws in SGD learning of shallow neural networks

We study the complexity of online stochastic gradient descent (SGD) for learning a two-layer neural network with $P$ neurons on isotropic Gaussian data: $f_*(x) = \sum_{p=1}^{P} a_p\, \sigma(\langle x, v_p\rangle)$, $x \sim \mathcal{N}(0, I_d)$, where the activation $\sigma : \mathbb{R} \to \mathbb{R}$ is an even function with information exponent $k_* > 2$ (defined as the lowest degree in the Hermite expansion), $\{v_p\}_{p \in [P]} \subset \mathbb{R}^d$ are orthonormal signal directions, and the non-negative second-layer coefficients satisfy $\sum_{p} a_p^2 = 1$. We focus on the challenging ``extensive-width'' regime $P \gg 1$ and permit a diverging condition number in the second layer, covering as a special case the power-law scaling $a_p \asymp p^{-\beta}$. We provide a precise analysis of the SGD dynamics for training a student two-layer network to minimize the mean squared error (MSE) objective, and explicitly identify sharp transition times at which each signal direction is recovered. In the power-law setting, we characterize the scaling-law exponents of the MSE loss with respect to the number of training samples and SGD steps, as well as the number of parameters in the student neural network. Our analysis shows that while the learning of individual teacher neurons exhibits abrupt transitions, the juxtaposition of emergent learning curves at different timescales leads to a smooth scaling law for the cumulative objective.
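To make the setup concrete, the following minimal sketch (not the authors' code) simulates online, one-pass SGD on the MSE objective for a student two-layer network fitting the teacher $f_*$ above. The specific even activation (the fourth Hermite polynomial), the power-law exponent $\beta$, the student width, the step sizes, and the spherical projection of the first-layer weights are all illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of online SGD for a student two-layer network fitting the teacher
# f_*(x) = sum_p a_p * sigma(<x, v_p>) with x ~ N(0, I_d).
# Activation, widths, step sizes, and the spherical projection are assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, P, M = 64, 8, 32     # input dimension, teacher width, student width (assumed)
beta = 1.0              # assumed power-law exponent for the second-layer coefficients

# Teacher: orthonormal directions v_p (rows of V) and power-law coefficients a_p.
V = np.linalg.qr(rng.standard_normal((d, P)))[0].T
a = np.arange(1, P + 1, dtype=float) ** -beta
a /= np.linalg.norm(a)                         # enforce sum_p a_p^2 = 1

def sigma(z):
    # Even activation with information exponent 4, e.g. the Hermite polynomial He_4
    # (an illustrative choice; the paper allows a general even activation).
    return z**4 - 6 * z**2 + 3

def dsigma(z):
    return 4 * z**3 - 12 * z

def teacher(x):
    return a @ sigma(V @ x)

# Student: first-layer weights on the unit sphere plus second-layer coefficients,
# trained by online SGD on the squared error of fresh Gaussian samples.
W = rng.standard_normal((M, d))
W /= np.linalg.norm(W, axis=1, keepdims=True)
b = np.full(M, 1.0 / M)
lr_w, lr_b = 1e-3, 1e-3

for t in range(100_000):
    x = rng.standard_normal(d)                 # fresh isotropic Gaussian sample
    pre = W @ x
    err = b @ sigma(pre) - teacher(x)          # residual on this sample
    # Gradients of 0.5 * err^2 with respect to W and b.
    grad_W = err * (b * dsigma(pre))[:, None] * x[None, :]
    grad_b = err * sigma(pre)
    W -= lr_w * grad_W
    W /= np.linalg.norm(W, axis=1, keepdims=True)   # project back to the sphere (assumed)
    b -= lr_b * grad_b

# Recovery of each teacher direction: maximal alignment over student neurons.
overlap = np.abs(V @ W.T).max(axis=1)
print("max |<v_p, w_j>| per teacher neuron:", np.round(overlap, 3))
```

Plotting the per-neuron overlaps against the iteration count would show the phenomenon described in the abstract: each teacher direction is picked up abruptly at its own timescale (later for smaller $a_p$), while the overall MSE traces a smooth decay.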
@article{ren2025_2504.19983,
  title={Emergence and scaling laws in SGD learning of shallow neural networks},
  author={Yunwei Ren and Eshaan Nichani and Denny Wu and Jason D. Lee},
  journal={arXiv preprint arXiv:2504.19983},
  year={2025}
}