ResearchTrend.AI

Conditioning of Random Feature Matrices: Double Descent and Generalization Error
Zhijun Chen, Hayden Schaeffer
21 October 2021
Papers citing "Conditioning of Random Feature Matrices: Double Descent and Generalization Error" (29 papers)
  • Abolfazl Hashemi, Hayden Schaeffer, Robert Shi, Ufuk Topcu, Giang Tran, Rachel A. Ward. "Generalization Bounds for Sparse Random Feature Expansions." 04 Mar 2021.
  • Song Mei, Theodor Misiakiewicz, Andrea Montanari. "Generalization error of random features and kernel methods: hypercontractivity and kernel matrix concentration." 26 Jan 2021.
  • Kelvin K. Kan, J. Nagy, Lars Ruthotto. "Avoiding The Double Descent Phenomenon of Random Feature Models Using Hybrid Regularization." 11 Dec 2020.
  • Alexander Tsigler, Peter L. Bartlett. "Benign overfitting in ridge regression." 29 Sep 2020.
  • E. Weinan, Chao Ma, Stephan Wojtowytsch, Lei Wu. "Towards a Mathematical Understanding of Neural Network-Based Machine Learning: what we know and what we don't." 22 Sep 2020.
  • Chao Ma, Lei Wu, E. Weinan. "The Slow Deterioration of the Generalization Error of the Random Feature Model." 13 Aug 2020.
  • Zhenyu Liao, Romain Couillet, Michael W. Mahoney. "A Random Matrix Analysis of Random Fourier Features: Beyond the Gaussian Kernel, a Precise Phase Transition, and the Corresponding Double Descent." 09 Jun 2020.
  • L. Pastur. "On Random Matrices Arising in Deep Neural Networks. Gaussian Case." 17 Jan 2020.
  • T. Poggio, Gil Kur, Andy Banburski. "Double descent in the condition number." 12 Dec 2019.
  • Song Mei, Andrea Montanari. "The generalization error of random features regression: Precise asymptotics and double descent curve." 14 Aug 2019.
  • Yifan Sun, Linan Zhang, Hayden Schaeffer. "NeuPDE: Neural Network Based Ordinary and Partial Differential Equations for Modeling Time-Dependent Data." 08 Aug 2019.
  • Peter L. Bartlett, Philip M. Long, Gábor Lugosi, Alexander Tsigler. "Benign Overfitting in Linear Regression." 26 Jun 2019.
  • Trevor Hastie, Andrea Montanari, Saharon Rosset, Robert Tibshirani. "Surprises in High-Dimensional Ridgeless Least Squares Interpolation." 19 Mar 2019.
  • M. Belkin, Daniel J. Hsu, Ji Xu. "Two models of double descent for weak features." 18 Mar 2019.
  • M. Belkin, Daniel J. Hsu, Siyuan Ma, Soumik Mandal. "Reconciling modern machine learning practice and the bias-variance trade-off." 28 Dec 2018.
  • Linan Zhang, Hayden Schaeffer. "Forward Stability of ResNet and Its Variants." 24 Nov 2018.
  • Tengyuan Liang, Alexander Rakhlin. "Just Interpolate: Kernel "Ridgeless" Regression Can Generalize." 01 Aug 2018.
  • M. Belkin, Alexander Rakhlin, Alexandre B. Tsybakov. "Does data interpolation contradict statistical optimality?" 25 Jun 2018.
  • Zhenyu Liao, Romain Couillet. "On the Spectrum of Random Features Maps of High Dimensional Data." 30 May 2018.
  • Jonathan Frankle, Michael Carbin. "The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks." 09 Mar 2018.
  • M. Belkin, Siyuan Ma, Soumik Mandal. "To understand deep learning we need to understand kernel learning." 05 Feb 2018.
  • Jeffrey Pennington, S. Schoenholz, Surya Ganguli. "Resurrecting the sigmoid in deep learning through dynamical isometry: theory and practice." 13 Nov 2017.
  • Madhu S. Advani, Andrew M. Saxe. "High-dimensional dynamics of generalization error in neural networks." 10 Oct 2017.
  • E. Haber, Lars Ruthotto. "Stable Architectures for Deep Neural Networks." 09 May 2017.
  • Cosme Louart, Zhenyu Liao, Romain Couillet. "A Random Matrix Approach to Neural Networks." 17 Feb 2017.
  • Alessandro Rudi, Lorenzo Rosasco. "Generalization Properties of Learning with Random Features." 14 Feb 2016.
  • Z. Fan, Andrea Montanari. "The Spectral Norm of Random Inner-Product Kernel Matrices." 19 Jul 2015.
  • Alessandro Rudi, Raffaello Camoriano, Lorenzo Rosasco. "Less is More: Nyström Computational Regularization." 16 Jul 2015.
  • N. Karoui. "The spectrum of kernel random matrices." 04 Jan 2010.