Feature Learning Beyond the Edge of Stability

Abstract

We propose a homogeneous multilayer perceptron parameterization with a polynomial hidden layer width pattern and analyze its training dynamics under stochastic gradient descent with depthwise gradient scaling in a general supervised learning scenario. We obtain formulas for the first three Taylor coefficients of the minibatch loss during training that illuminate the connection between sharpness and feature learning, providing in particular a soft rank variant that quantifies the quality of learned hidden layer features. Based on our theory, we design a gradient scaling scheme that, in tandem with a quadratic width pattern, enables training beyond the edge of stability without loss explosions or numerical errors, resulting in improved feature learning and implicit sharpness regularization, as we demonstrate empirically.

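The abstract does not spell out the exact width pattern or the depthwise scaling factors, so the sketch below is only an illustrative PyTorch rendering of the general setup it describes: an MLP whose hidden widths follow a quadratic pattern and whose per-layer gradients are rescaled by a depth-dependent factor before each SGD step. The width formula base_width * (l + 1)**2 and the scaling factor 1 / (l + 1) are placeholder assumptions, not the scheme derived in the paper.

import torch
import torch.nn as nn

def make_mlp(d_in, d_out, depth, base_width):
    # Hypothetical quadratic width pattern: hidden layer l has width base_width * (l + 1)^2.
    widths = [d_in] + [base_width * (l + 1) ** 2 for l in range(depth)] + [d_out]
    layers = []
    for i in range(len(widths) - 1):
        layers.append(nn.Linear(widths[i], widths[i + 1], bias=False))
        if i < len(widths) - 2:
            layers.append(nn.ReLU())  # positively homogeneous activation
    return nn.Sequential(*layers)

model = make_mlp(d_in=32, d_out=1, depth=3, base_width=16)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()
linear_layers = [m for m in model if isinstance(m, nn.Linear)]

for step in range(100):
    x = torch.randn(64, 32)   # stand-in minibatch
    y = torch.randn(64, 1)
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    # Illustrative depthwise gradient scaling: rescale each layer's gradient
    # by a depth-dependent factor before the SGD step (placeholder factor).
    with torch.no_grad():
        for l, layer in enumerate(linear_layers):
            layer.weight.grad.mul_(1.0 / (l + 1))
    opt.step()
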
@article{terjék2025_2502.13110,
  title={Feature Learning Beyond the Edge of Stability},
  author={Dávid Terjék},
  journal={arXiv preprint arXiv:2502.13110},
  year={2025}
}