Flat Channels to Infinity in Neural Loss Landscapes

The loss landscapes of neural networks contain minima and saddle points that may be connected in flat regions or appear in isolation. We identify and characterize a special structure in the loss landscape: channels along which the loss decreases extremely slowly, while the output weights of at least two neurons, $a_1$ and $a_2$, diverge to infinity, and their input weight vectors, $\mathbf{w}_1$ and $\mathbf{w}_2$, become equal to each other. At convergence, the two neurons implement a gated linear unit: $a_1\sigma(\mathbf{w}_1 \cdot \mathbf{x}) + a_2\sigma(\mathbf{w}_2 \cdot \mathbf{x}) \rightarrow \sigma(\mathbf{w} \cdot \mathbf{x}) + (\mathbf{v} \cdot \mathbf{x})\,\sigma'(\mathbf{w} \cdot \mathbf{x})$. Geometrically, these channels to infinity are asymptotically parallel to symmetry-induced lines of critical points. Gradient flow solvers, and related optimization methods such as SGD and Adam, reach the channels with high probability in diverse regression settings, but without careful inspection they look like flat local minima with finite parameter values. Our characterization provides a comprehensive picture of these quasi-flat regions in terms of gradient dynamics, geometry, and functional interpretation. The emergence of gated linear units at the end of the channels highlights a surprising aspect of the computational capabilities of fully connected layers.
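
The limiting behavior can be checked numerically. The sketch below is illustrative and not taken from the paper: it assumes a tanh activation and a particular parameterization of a point far along a channel, with output weights $a_1 = a + 1$, $a_2 = -a$ and input weights $\mathbf{w}_1 = \mathbf{w} + \mathbf{v}/a$, $\mathbf{w}_2 = \mathbf{w}$. As $a$ grows, the sum of the two neurons approaches the gated linear unit $\sigma(\mathbf{w} \cdot \mathbf{x}) + (\mathbf{v} \cdot \mathbf{x})\,\sigma'(\mathbf{w} \cdot \mathbf{x})$.

```python
import numpy as np

# Illustrative sketch (assumptions, not the paper's setup): sigma = tanh,
# and a point far along a channel parameterized by a scalar a with
#   a1 = a + 1, a2 = -a, w1 = w + v/a, w2 = w.
rng = np.random.default_rng(0)
d = 5                              # input dimension (arbitrary)
w = rng.normal(size=d)             # shared input weight vector at convergence
v = rng.normal(size=d)             # direction defining the gated linear part
x = rng.normal(size=(1000, d))     # random inputs used for the comparison

sigma = np.tanh
dsigma = lambda z: 1.0 - np.tanh(z) ** 2   # derivative of tanh

def two_neurons(a):
    """Summed output of the two neurons at position a along the channel."""
    return (a + 1.0) * sigma(x @ (w + v / a)) - a * sigma(x @ w)

# Limiting gated linear unit: sigma(w.x) + (v.x) * sigma'(w.x)
glu = sigma(x @ w) + (x @ v) * dsigma(x @ w)

for a in [1e1, 1e2, 1e3, 1e4]:
    err = np.max(np.abs(two_neurons(a) - glu))
    print(f"a = {a:8.0f}   max deviation from GLU limit: {err:.2e}")
```

The deviation shrinks roughly as $1/a$, consistent with a first-order Taylor expansion of $\sigma$ around $\mathbf{w} \cdot \mathbf{x}$.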
@article{martinelli2025_2506.14951,
  title   = {Flat Channels to Infinity in Neural Loss Landscapes},
  author  = {Flavio Martinelli and Alexander Van Meegen and Berfin Şimşek and Wulfram Gerstner and Johanni Brea},
  journal = {arXiv preprint arXiv:2506.14951},
  year    = {2025}
}