arXiv:1809.09349
The jamming transition as a paradigm to understand the loss landscape of deep neural networks
25 September 2018
Mario Geiger, S. Spigler, Stéphane d'Ascoli, Levent Sagun, Marco Baity-Jesi, Giulio Biroli, M. Wyart
Papers citing "The jamming transition as a paradigm to understand the loss landscape of deep neural networks" (28 papers):
1. Investigating the Impact of Model Complexity in Large Language Models. Jing Luo, Huiyuan Wang, Weiran Huang (01 Oct 2024).
2. How DNNs break the Curse of Dimensionality: Compositionality and Symmetry Learning. Arthur Jacot, Seok Hoan Choi, Yuxiao Wen [AI4CE] (08 Jul 2024).
3. Gibbs-Based Information Criteria and the Over-Parameterized Regime. Haobo Chen, Yuheng Bu, Greg Wornell (08 Jun 2023).
4. Subspace-Configurable Networks. Dong Wang, O. Saukh, Xiaoxi He, Lothar Thiele [OOD] (22 May 2023).
5. Can we avoid Double Descent in Deep Neural Networks? Victor Quétu, Enzo Tartaglione [AI4CE] (26 Feb 2023).
6. On the Lipschitz Constant of Deep Networks and Double Descent. Matteo Gamba, Hossein Azizpour, Mårten Björkman (28 Jan 2023).
7. REPAIR: REnormalizing Permuted Activations for Interpolation Repair. Keller Jordan, Hanie Sedghi, O. Saukh, R. Entezari, Behnam Neyshabur [MoMe] (15 Nov 2022).
8. A Solvable Model of Neural Scaling Laws. A. Maloney, Daniel A. Roberts, J. Sully (30 Oct 2022).
9. Deep Double Descent via Smooth Interpolation. Matteo Gamba, Erik Englesson, Mårten Björkman, Hossein Azizpour (21 Sep 2022).
10. Information FOMO: The unhealthy fear of missing out on information. A method for removing misleading data for healthier models. Ethan Pickering, T. Sapsis (27 Aug 2022).
11. A generalization gap estimation for overparameterized models via the Langevin functional variance. Akifumi Okuno, Keisuke Yano (07 Dec 2021).
12. Multi-scale Feature Learning Dynamics: Insights for Double Descent. Mohammad Pezeshki, Amartya Mitra, Yoshua Bengio, Guillaume Lajoie (06 Dec 2021).
13. The Role of Permutation Invariance in Linear Mode Connectivity of Neural Networks. R. Entezari, Hanie Sedghi, O. Saukh, Behnam Neyshabur [MoMe] (12 Oct 2021).
14. Classification and Adversarial examples in an Overparameterized Linear Model: A Signal Processing Perspective. Adhyyan Narang, Vidya Muthukumar, A. Sahai [SILM, AAML] (27 Sep 2021).
15. Memorizing without overfitting: Bias, variance, and interpolation in over-parameterized models. J. Rocks, Pankaj Mehta (26 Oct 2020).
16. Multiple Descent: Design Your Own Generalization Curve. Lin Chen, Yifei Min, M. Belkin, Amin Karbasi [DRL] (03 Aug 2020).
17. On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them. Chen Liu, Mathieu Salzmann, Tao R. Lin, Ryota Tomioka, Sabine Süsstrunk [AAML] (15 Jun 2020).
18. Double Descent Risk and Volume Saturation Effects: A Geometric Perspective. Prasad Cheema, M. Sugiyama (08 Jun 2020).
19. Is deeper better? It depends on locality of relevant features. Takashi Mori, Masahito Ueda [OOD] (26 May 2020).
20. Spectra of the Conjugate Kernel and Neural Tangent Kernel for linear-width neural networks. Z. Fan, Zhichao Wang (25 May 2020).
21. Classification vs regression in overparameterized regimes: Does the loss function matter? Vidya Muthukumar, Adhyyan Narang, Vignesh Subramanian, M. Belkin, Daniel J. Hsu, A. Sahai (16 May 2020).
22. Optimization for deep learning: theory and algorithms. Ruoyu Sun [ODL] (19 Dec 2019).
23. In Defense of Uniform Convergence: Generalization via derandomization with an application to interpolating predictors. Jeffrey Negrea, Gintare Karolina Dziugaite, Daniel M. Roy [AI4CE] (09 Dec 2019).
24. From complex to simple: hierarchical free-energy landscape renormalized in deep neural networks. H. Yoshino (22 Oct 2019).
25. Measurements of Three-Level Hierarchical Structure in the Outliers in the Spectrum of Deepnet Hessians. Vardan Papyan (24 Jan 2019).
26. A Tail-Index Analysis of Stochastic Gradient Noise in Deep Neural Networks. Umut Simsekli, Levent Sagun, Mert Gurbuzbalaban (18 Jan 2019).
27. Scaling description of generalization with number of parameters in deep learning. Mario Geiger, Arthur Jacot, S. Spigler, Franck Gabriel, Levent Sagun, Stéphane d'Ascoli, Giulio Biroli, Clément Hongler, M. Wyart (06 Jan 2019).
28. The Loss Surfaces of Multilayer Networks. A. Choromańska, Mikael Henaff, Michaël Mathieu, Gerard Ben Arous, Yann LeCun [ODL] (30 Nov 2014).