Model, sample, and epoch-wise descents: exact solution of gradient flow in the random feature model
A. Bodin, N. Macris
arXiv:2110.11805 · 22 October 2021
Papers citing "Model, sample, and epoch-wise descents: exact solution of gradient flow in the random feature model" (14 of 14 papers shown):
Towards understanding epoch-wise double descent in two-layer linear neural networks
Amanda Olmin, Fredrik Lindsten · 13 Jul 2024

Information limits and Thouless-Anderson-Palmer equations for spiked matrix models with structured noise
Jean Barbier, Francesco Camilli, Marco Mondelli, Yizhou Xu · 31 May 2024

Stochastic Gradient Flow Dynamics of Test Risk and its Exact Solution for Weak Features
Rodrigo Veiga, Anastasia Remizova, Nicolas Macris · 12 Feb 2024

A Dynamical Model of Neural Scaling Laws
Blake Bordelon, Alexander B. Atanasov, C. Pehlevan · 02 Feb 2024

Understanding the Role of Optimization in Double Descent
Chris Liu, Jeffrey Flanigan · 06 Dec 2023

Gradient flow on extensive-rank positive semi-definite matrix denoising
A. Bodin, N. Macris · 16 Mar 2023

Learning time-scales in two-layers neural networks
Raphael Berthier, Andrea Montanari, Kangjie Zhou · 28 Feb 2023

Deterministic equivalent and error universality of deep random features learning
Dominik Schröder, Hugo Cui, Daniil Dmitriev, Bruno Loureiro · 01 Feb 2023

Gradient flow in the gaussian covariate model: exact solution of learning curves and multiple descent structures
Antoine Bodin, N. Macris · 13 Dec 2022

High-dimensional Asymptotics of Feature Learning: How One Gradient Step Improves the Representation
Jimmy Ba, Murat A. Erdogdu, Taiji Suzuki, Zhichao Wang, Denny Wu, Greg Yang · 03 May 2022

High-dimensional Asymptotics of Langevin Dynamics in Spiked Matrix Models
Tengyuan Liang, Subhabrata Sen, Pragya Sur · 09 Apr 2022

Generalizing similarity in noisy setups: the DIBS phenomenon
Nayara Fonseca, V. Guidetti · 30 Jan 2022

Double Trouble in Double Descent: Bias and Variance(s) in the Lazy Regime
Stéphane d'Ascoli, Maria Refinetti, Giulio Biroli, Florent Krzakala · 02 Mar 2020

Scaling Laws for Neural Language Models
Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei · 23 Jan 2020