ResearchTrend.AI
Sharp Asymptotics of Kernel Ridge Regression Beyond the Linear Regime
Hong Hu, Yue M. Lu
arXiv:2205.06798, 13 May 2022

Papers citing "Sharp Asymptotics of Kernel Ridge Regression Beyond the Linear Regime" (21 papers shown)
  1. An Equivalence Principle for the Spectrum of Random Inner-Product Kernel Matrices with Polynomial Scalings — Yue M. Lu, H. Yau (12 May 2022)
  2. Spectrum of inner-product kernel matrices in the polynomial regime and multiple descent phenomenon in kernel ridge regression — Theodor Misiakiewicz (21 Apr 2022)
  3. Generalization error of random features and kernel methods: hypercontractivity and kernel matrix concentration — Song Mei, Theodor Misiakiewicz, Andrea Montanari (26 Jan 2021)
  4. The Neural Tangent Kernel in High Dimensions: Triple Descent and a Multi-Scale Theory of Generalization — Ben Adlam, Jeffrey Pennington (15 Aug 2020)
  5. Spectral Bias and Task-Model Alignment Explain Generalization in Kernel Regression and Infinitely Wide Neural Networks — Abdulkadir Canatar, Blake Bordelon, Cengiz Pehlevan (23 Jun 2020)
  6. Asymptotics of Ridge(less) Regression under General Source Condition — Dominic Richards, Jaouad Mourtada, Lorenzo Rosasco (11 Jun 2020)
  7. On the Optimal Weighted $\ell_2$ Regularization in Overparameterized Linear Regression — Denny Wu, Ji Xu (10 Jun 2020)
  8. Double Trouble in Double Descent: Bias and Variance(s) in the Lazy Regime — Stéphane d'Ascoli, Maria Refinetti, Giulio Biroli, Florent Krzakala (02 Mar 2020)
  9. Spectrum Dependent Learning Curves in Kernel Regression and Wide Neural Networks — Blake Bordelon, Abdulkadir Canatar, Cengiz Pehlevan (07 Feb 2020)
  10. Deep Double Descent: Where Bigger Models and More Data Hurt — Preetum Nakkiran, Gal Kaplun, Yamini Bansal, Tristan Yang, Boaz Barak, Ilya Sutskever (04 Dec 2019)
  11. The generalization error of random features regression: Precise asymptotics and double descent curve — Song Mei, Andrea Montanari (14 Aug 2019)
  12. Linearized two-layers neural networks in high dimension — Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, Andrea Montanari (27 Apr 2019)
  13. Surprises in High-Dimensional Ridgeless Least Squares Interpolation — Trevor Hastie, Andrea Montanari, Saharon Rosset, Robert Tibshirani (19 Mar 2019)
  14. Consistency of Interpolation with Laplace Kernels is a High-Dimensional Phenomenon — Alexander Rakhlin, Xiyu Zhai (28 Dec 2018)
  15. Gradient Descent Finds Global Minima of Deep Neural Networks — S. Du, Jason D. Lee, Haochuan Li, Liwei Wang, Masayoshi Tomizuka (09 Nov 2018)
  16. Just Interpolate: Kernel "Ridgeless" Regression Can Generalize — Tengyuan Liang, Alexander Rakhlin (01 Aug 2018)
  17. Neural Tangent Kernel: Convergence and Generalization in Neural Networks — Arthur Jacot, Franck Gabriel, Clément Hongler (20 Jun 2018)
  18. To understand deep learning we need to understand kernel learning — M. Belkin, Siyuan Ma, Soumik Mandal (05 Feb 2018)
  19. Toward Deeper Understanding of Neural Networks: The Power of Initialization and a Dual View on Expressivity — Amit Daniely, Roy Frostig, Y. Singer (18 Feb 2016)
  20. Ridge regression and asymptotic minimax estimation over spheres of growing dimension — Lee H. Dicker (15 Jan 2016)
  21. High-Dimensional Asymptotics of Prediction: Ridge Regression and Classification — Yan Sun, Stefan Wager (10 Jul 2015)