Gradient flow in the gaussian covariate model: exact solution of learning curves and multiple descent structures

13 December 2022
Antoine Bodin
N. Macris

Papers citing "Gradient flow in the gaussian covariate model: exact solution of learning curves and multiple descent structures"

Towards understanding epoch-wise double descent in two-layer linear neural networks. Amanda Olmin, Fredrik Lindsten. 13 Jul 2024.

Information limits and Thouless-Anderson-Palmer equations for spiked matrix models with structured noise. Jean Barbier, Francesco Camilli, Marco Mondelli, Yizhou Xu. 31 May 2024.

Grokking in Linear Estimators -- A Solvable Model that Groks without Understanding. Noam Levi, Alon Beck, Yohai Bar-Sinai. 25 Oct 2023.

Gradient flow on extensive-rank positive semi-definite matrix denoising. A. Bodin, N. Macris. 16 Mar 2023.

Precise Learning Curves and Higher-Order Scaling Limits for Dot Product Kernel Regression. Lechao Xiao, Hong Hu, Theodor Misiakiewicz, Yue M. Lu, Jeffrey Pennington. 30 May 2022.

Sharp Asymptotics of Kernel Ridge Regression Beyond the Linear Regime. Hong Hu, Yue M. Lu. 13 May 2022.

Double Trouble in Double Descent: Bias and Variance(s) in the Lazy Regime. Stéphane d'Ascoli, Maria Refinetti, Giulio Biroli, Florent Krzakala. 02 Mar 2020.

Spectrum Dependent Learning Curves in Kernel Regression and Wide Neural Networks. Blake Bordelon, Abdulkadir Canatar, C. Pehlevan. 07 Feb 2020.

Cleaning large correlation matrices: tools from random matrix theory. J. Bun, J. Bouchaud, M. Potters. 25 Oct 2016.