Gradient Descent on Infinitely Wide Neural Networks: Global Convergence and Generalization
Francis R. Bach, Lénaïc Chizat
arXiv:2110.08084, 15 October 2021

Papers citing "Gradient Descent on Infinitely Wide Neural Networks: Global Convergence and Generalization"
6 of 6 papers shown

Ultra-fast feature learning for the training of two-layer neural networks in the two-timescale regime
Raphael Barboni, Gabriel Peyré, François-Xavier Vialard
25 Apr 2025

From high-dimensional & mean-field dynamics to dimensionless ODEs: A unifying approach to SGD in two-layers networks
Luca Arnaboldi, Ludovic Stephan, Florent Krzakala, Bruno Loureiro
12 Feb 2023

Infinite-width limit of deep linear neural networks
Lénaïc Chizat, Maria Colombo, Xavier Fernández-Real, Alessio Figalli
29 Nov 2022

Intrinsic dimensionality and generalization properties of the $\mathcal{R}$-norm inductive bias
Navid Ardeshir, Daniel J. Hsu, Clayton Sanford
10 Jun 2022

Phase diagram of Stochastic Gradient Descent in high-dimensional two-layer neural networks
R. Veiga, Ludovic Stephan, Bruno Loureiro, Florent Krzakala, Lenka Zdeborová
01 Feb 2022

A Local Convergence Theory for Mildly Over-Parameterized Two-Layer Neural Network
Mo Zhou, Rong Ge, Chi Jin
04 Feb 2021