
Flat Channels to Infinity in Neural Loss Landscapes

Flavio Martinelli
Alexander Van Meegen
Berfin Şimşek
Wulfram Gerstner
Johanni Brea
Main: 9 pages · Bibliography: 4 pages · Appendix: 13 pages · 24 figures · 1 table
Abstract

The loss landscapes of neural networks contain minima and saddle points that may be connected in flat regions or appear in isolation. We identify and characterize a special structure in the loss landscape: channels along which the loss decreases extremely slowly, while the output weights of at least two neurons, $a_i$ and $a_j$, diverge to $\pm\infty$ and their input weight vectors, $\mathbf{w}_i$ and $\mathbf{w}_j$, become equal to each other. At convergence, the two neurons implement a gated linear unit: $a_i\sigma(\mathbf{w}_i \cdot \mathbf{x}) + a_j\sigma(\mathbf{w}_j \cdot \mathbf{x}) \rightarrow \sigma(\mathbf{w} \cdot \mathbf{x}) + (\mathbf{v} \cdot \mathbf{x})\,\sigma'(\mathbf{w} \cdot \mathbf{x})$. Geometrically, these channels to infinity are asymptotically parallel to symmetry-induced lines of critical points. Gradient flow solvers, and related optimization methods like SGD or ADAM, reach the channels with high probability in diverse regression settings, but without careful inspection they look like flat local minima with finite parameter values. Our characterization provides a comprehensive picture of these quasi-flat regions in terms of gradient dynamics, geometry, and functional interpretation. The emergence of gated linear units at the end of the channels highlights a surprising aspect of the computational capabilities of fully connected layers.
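As a minimal numerical sketch of the limit stated above: one parameterization that realizes it (our illustrative assumption, not taken from the paper) sets $a_i = c$, $a_j = 1 - c$, $\mathbf{w}_i = \mathbf{w} + \mathbf{v}/c$, $\mathbf{w}_j = \mathbf{w}$, so that a first-order Taylor expansion of $\sigma$ gives the gated linear unit with an $O(1/c)$ error as $c \to \infty$. The snippet below (with an assumed tanh activation) checks this numerically.

import numpy as np

# Sketch under assumed parameterization: a_i = c, a_j = 1 - c, w_i = w + v/c, w_j = w.
# The two-neuron sum should approach sigma(w.x) + (v.x) * sigma'(w.x) as c grows.
rng = np.random.default_rng(0)
sigma = np.tanh                                  # assumed smooth activation
dsigma = lambda z: 1.0 - np.tanh(z) ** 2         # its derivative

d = 5
w, v, x = rng.normal(size=d), rng.normal(size=d), rng.normal(size=d)

for c in [1e1, 1e3, 1e5]:
    a_i, a_j = c, 1.0 - c
    w_i, w_j = w + v / c, w
    two_neurons = a_i * sigma(w_i @ x) + a_j * sigma(w_j @ x)
    gated_linear_unit = sigma(w @ x) + (v @ x) * dsigma(w @ x)
    print(c, abs(two_neurons - gated_linear_unit))  # gap shrinks roughly like 1/c

Note that along this path the output weights diverge while the input weight vectors converge to each other, matching the qualitative description of the channels in the abstract.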

@article{martinelli2025_2506.14951,
  title={Flat Channels to Infinity in Neural Loss Landscapes},
  author={Flavio Martinelli and Alexander Van Meegen and Berfin Şimşek and Wulfram Gerstner and Johanni Brea},
  journal={arXiv preprint arXiv:2506.14951},
  year={2025}
}