The generalization error of random features regression: Precise asymptotics and double descent curve
Song Mei, Andrea Montanari
arXiv:1908.05355 (14 August 2019)
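
Since nearly every entry below engages the double descent phenomenon this paper characterizes, a quick empirical illustration may help readers skimming the list. The sketch that follows is not code from Mei and Montanari; it is a minimal numpy demonstration under assumed choices (Gaussian synthetic data, ReLU random features, near-ridgeless ridge regression, and the particular n_train, d, and lam values below): as the number of random features N sweeps through the sample size n, the test error spikes at the interpolation threshold N ≈ n and descends again beyond it.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (assumed for illustration): y = <beta, x> + noise.
n_train, n_test, d = 300, 2000, 50
beta = rng.standard_normal(d) / np.sqrt(d)

def sample(n):
    X = rng.standard_normal((n, d))
    y = X @ beta + 0.1 * rng.standard_normal(n)
    return X, y

X_tr, y_tr = sample(n_train)
X_te, y_te = sample(n_test)

def rf_test_mse(N, lam=1e-6):
    """Random features regression with N fixed ReLU features.

    Only the top-layer coefficients a are trained; the feature weights W
    stay at their random draw, as in the random features model.
    """
    W = rng.standard_normal((N, d)) / np.sqrt(d)
    Z_tr = np.maximum(X_tr @ W.T, 0.0)   # n_train x N feature matrix
    Z_te = np.maximum(X_te @ W.T, 0.0)
    # Near-ridgeless ridge solution: a = (Z'Z + lam*I)^{-1} Z'y.
    a = np.linalg.solve(Z_tr.T @ Z_tr + lam * np.eye(N), Z_tr.T @ y_tr)
    return np.mean((Z_te @ a - y_te) ** 2)

# Test error should rise sharply near N = n_train and fall again beyond
# it: the double descent curve the paper above makes precise.
for N in (30, 100, 200, 290, 300, 310, 400, 1000, 3000):
    print(f"N/n = {N / n_train:5.2f}  test MSE = {rf_test_mse(N):.4f}")

Raising lam damps the spike; tuning the ridge penalty is one way the interpolation peak can be smoothed out.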

Papers citing "The generalization error of random features regression: Precise asymptotics and double descent curve"

Showing 50 of 227 citing papers.

Monotonicity and Double Descent in Uncertainty Estimation with Gaussian Processes
Liam Hodgkinson, Christopher van der Heide, Fred Roosta, Michael W. Mahoney (14 Oct 2022)

Deep Linear Networks can Benignly Overfit when Shallow Ones Do
Niladri S. Chatterji, Philip M. Long (19 Sep 2022)

Importance Tempering: Group Robustness for Overparameterized Models
Yiping Lu, Wenlong Ji, Zachary Izzo, Lexing Ying (19 Sep 2022)

Small Transformers Compute Universal Metric Embeddings
Anastasis Kratsios, Valentin Debarnot, Ivan Dokmanić (14 Sep 2022)

Generalisation under gradient descent via deterministic PAC-Bayes
Eugenio Clerico, Tyler Farghly, George Deligiannidis, Benjamin Guedj, Arnaud Doucet (06 Sep 2022)

Information FOMO: The unhealthy fear of missing out on information. A method for removing misleading data for healthier models
Ethan Pickering, T. Sapsis (27 Aug 2022)

Investigating the Impact of Model Width and Density on Generalization in Presence of Label Noise
Yihao Xue, Kyle Whitecross, Baharan Mirzasoleiman (17 Aug 2022)

A Universal Trade-off Between the Model Size, Test Loss, and Training Loss of Linear Predictors
Nikhil Ghosh, M. Belkin (23 Jul 2022)

How does overparametrization affect performance on minority groups?
Subha Maity, Saptarshi Roy, Songkai Xue, Mikhail Yurochkin, Yuekai Sun (07 Jun 2022)

Generalization for multiclass classification with overparameterized linear models
Vignesh Subramanian, Rahul Arya, A. Sahai (03 Jun 2022)

Regularization-wise double descent: Why it occurs and how to eliminate it
Fatih Yilmaz, Reinhard Heckel (03 Jun 2022)

Trajectory of Mini-Batch Momentum: Batch Size Saturation and Convergence in High Dimensions
Kiwon Lee, Andrew N. Cheng, Courtney Paquette, Elliot Paquette (02 Jun 2022)

Optimal Activation Functions for the Random Features Regression Model
Jianxin Wang, José Bento (31 May 2022)

Precise Learning Curves and Higher-Order Scaling Limits for Dot Product Kernel Regression
Lechao Xiao, Hong Hu, Theodor Misiakiewicz, Yue M. Lu, Jeffrey Pennington (30 May 2022)

Gaussian Universality of Perceptrons with Random Labels
Federica Gerace, Florent Krzakala, Bruno Loureiro, Ludovic Stephan, Lenka Zdeborová (26 May 2022)

Sharp Asymptotics of Kernel Ridge Regression Beyond the Linear Regime
Hong Hu, Yue M. Lu (13 May 2022)

An Equivalence Principle for the Spectrum of Random Inner-Product Kernel Matrices with Polynomial Scalings
Yue M. Lu, H. Yau (12 May 2022)

High-dimensional Asymptotics of Feature Learning: How One Gradient Step Improves the Representation
Jimmy Ba, Murat A. Erdogdu, Taiji Suzuki, Zhichao Wang, Denny Wu, Greg Yang (03 May 2022)

Ridgeless Regression with Random Features
Jian Li, Yong-Jin Liu, Yingying Zhang (01 May 2022)

Spectrum of inner-product kernel matrices in the polynomial regime and multiple descent phenomenon in kernel ridge regression
Theodor Misiakiewicz (21 Apr 2022)

Concentration of Random Feature Matrices in High-Dimensions
Zhijun Chen, Hayden Schaeffer, Rachel A. Ward (14 Apr 2022)

On the (Non-)Robustness of Two-Layer Neural Networks in Different Learning Regimes
Elvis Dohmatob, A. Bietti (22 Mar 2022)

More Than a Toy: Random Matrix Models Predict How Real-World Neural Representations Generalize
Alexander Wei, Wei Hu, Jacob Steinhardt (11 Mar 2022)

Bias-variance decomposition of overparameterized regression with random linear features
J. Rocks, Pankaj Mehta (10 Mar 2022)

Generalization Through The Lens Of Leave-One-Out Error
Gregor Bachmann, Thomas Hofmann, Aurelien Lucchi (07 Mar 2022)

Estimation under Model Misspecification with Fake Features
Martin Hellkvist, Ayça Özçelikkale, Anders Ahlén (07 Mar 2022)

Contrasting random and learned features in deep Bayesian linear regression
Jacob A. Zavatone-Veth, William L. Tong, Cengiz Pehlevan (01 Mar 2022)

Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution
Ananya Kumar, Aditi Raghunathan, Robbie Jones, Tengyu Ma, Percy Liang (21 Feb 2022)

Memorize to Generalize: on the Necessity of Interpolation in High Dimensional Linear Regression
Chen Cheng, John C. Duchi, Rohith Kuditipudi (20 Feb 2022)

Interpolation and Regularization for Causal Learning
L. C. Vankadara, Luca Rendsburg, U. V. Luxburg, Debarghya Ghoshdastidar (18 Feb 2022)

Universality of empirical risk minimization
Andrea Montanari, Basil Saeed (17 Feb 2022)

Benign Overfitting in Two-layer Convolutional Neural Networks
Yuan Cao, Zixiang Chen, M. Belkin, Quanquan Gu (14 Feb 2022)

Benign Overfitting without Linearity: Neural Network Classifiers Trained by Gradient Descent for Noisy Linear Data
Spencer Frei, Niladri S. Chatterji, Peter L. Bartlett (11 Feb 2022)

Deep Networks on Toroids: Removing Symmetries Reveals the Structure of Flat Regions in the Landscape Geometry
Fabrizio Pittorino, Antonio Ferraro, Gabriele Perugini, Christoph Feinauer, Carlo Baldassi, R. Zecchina (07 Feb 2022)

HARFE: Hard-Ridge Random Feature Expansion
Esha Saha, Hayden Schaeffer, Giang Tran (06 Feb 2022)

Data-driven emergence of convolutional structure in neural networks
Alessandro Ingrosso, Sebastian Goldt (01 Feb 2022)

Towards Sample-efficient Overparameterized Meta-learning
Yue Sun, Adhyyan Narang, Halil Ibrahim Gulluk, Samet Oymak, Maryam Fazel (16 Jan 2022)

On generalization bounds for deep networks based on loss surface implicit regularization
Masaaki Imaizumi, Johannes Schmidt-Hieber (12 Jan 2022)

The dynamics of representation learning in shallow, non-linear autoencoders
Maria Refinetti, Sebastian Goldt (06 Jan 2022)

The Effect of Model Size on Worst-Group Generalization
Alan Pham, Eunice Chan, V. Srivatsa, Dhruba Ghosh, Yaoqing Yang, Yaodong Yu, Ruiqi Zhong, Joseph E. Gonzalez, Jacob Steinhardt (08 Dec 2021)

Understanding Square Loss in Training Overparametrized Neural Network Classifiers
Tianyang Hu, Jun Wang, Wei Cao, Zhenguo Li (07 Dec 2021)

Multi-scale Feature Learning Dynamics: Insights for Double Descent
Mohammad Pezeshki, Amartya Mitra, Yoshua Bengio, Guillaume Lajoie (06 Dec 2021)

Tight bounds for minimum l1-norm interpolation of noisy data
Guillaume Wang, Konstantin Donhauser, Fanny Yang (10 Nov 2021)

Harmless interpolation in regression and classification with structured features
Andrew D. McRae, Santhosh Karnik, Mark A. Davenport, Vidya Muthukumar (09 Nov 2021)

Model, sample, and epoch-wise descents: exact solution of gradient flow in the random feature model
A. Bodin, N. Macris (22 Oct 2021)

Conditioning of Random Feature Matrices: Double Descent and Generalization Error
Zhijun Chen, Hayden Schaeffer (21 Oct 2021)

On the Double Descent of Random Features Models Trained with SGD
Fanghui Liu, Johan A. K. Suykens, Volkan Cevher (13 Oct 2021)

Learning through atypical "phase transitions" in overparameterized neural networks
Carlo Baldassi, Clarissa Lauditi, Enrico M. Malatesta, R. Pacelli, Gabriele Perugini, R. Zecchina (01 Oct 2021)

Classification and Adversarial examples in an Overparameterized Linear Model: A Signal Processing Perspective
Adhyyan Narang, Vidya Muthukumar, A. Sahai (27 Sep 2021)

Deformed semicircle law and concentration of nonlinear random matrices for ultra-wide neural networks
Zhichao Wang, Yizhe Zhu (20 Sep 2021)