Precise Learning Curves and Higher-Order Scaling Limits for Dot Product Kernel Regression

30 May 2022
Lechao Xiao, Hong Hu, Theodor Misiakiewicz, Yue M. Lu, Jeffrey Pennington

Papers citing "Precise Learning Curves and Higher-Order Scaling Limits for Dot Product Kernel Regression"

31 papers

Sharp Asymptotics of Kernel Ridge Regression Beyond the Linear Regime
Hong Hu, Yue M. Lu
13 May 2022

An Equivalence Principle for the Spectrum of Random Inner-Product Kernel Matrices with Polynomial Scalings
Yue M. Lu, H. Yau
12 May 2022

Spectrum of inner-product kernel matrices in the polynomial regime and multiple descent phenomenon in kernel ridge regression
Theodor Misiakiewicz
21 Apr 2022

PaLM: Scaling Language Modeling with Pathways
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, ..., Kathy Meier-Hellstern, Douglas Eck, J. Dean, Slav Petrov, Noah Fiedel
05 Apr 2022

Eigenspace Restructuring: a Principle of Space and Frequency in Neural Networks
Lechao Xiao
10 Dec 2021

Covariate Shift in High-Dimensional Random Feature Regression
Nilesh Tripuraneni, Ben Adlam, Jeffrey Pennington
16 Nov 2021

Learning curves of generic features maps for realistic datasets with a teacher-student model
Bruno Loureiro, Cédric Gerbelot, Hugo Cui, Sebastian Goldt, Florent Krzakala, M. Mézard, Lenka Zdeborová
16 Feb 2021

Generalization error of random features and kernel methods: hypercontractivity and kernel matrix concentration
Song Mei, Theodor Misiakiewicz, Andrea Montanari
26 Jan 2021

Why do classifier accuracies show linear trends under distribution shift?
Horia Mania, S. Sra
31 Dec 2020

Feature Learning in Infinite-Width Neural Networks
Greg Yang, J. E. Hu
30 Nov 2020

Understanding Double Descent Requires a Fine-Grained Bias-Variance Decomposition
Ben Adlam, Jeffrey Pennington
04 Nov 2020

What causes the test error? Going beyond bias-variance via ANOVA
Licong Lin, Yan Sun
11 Oct 2020

The Neural Tangent Kernel in High Dimensions: Triple Descent and a Multi-Scale Theory of Generalization
Ben Adlam, Jeffrey Pennington
15 Aug 2020

The Gaussian equivalence of generative models for learning with shallow neural networks
Sebastian Goldt, Bruno Loureiro, Galen Reeves, Florent Krzakala, M. Mézard, Lenka Zdeborová
25 Jun 2020

Tensor Programs II: Neural Tangent Kernel for Any Architecture
Greg Yang
25 Jun 2020

Spectral Bias and Task-Model Alignment Explain Generalization in Kernel Regression and Infinitely Wide Neural Networks
Abdulkadir Canatar, Blake Bordelon, Cengiz Pehlevan
23 Jun 2020

On the Optimal Weighted $\ell_2$ Regularization in Overparameterized Linear Regression
Denny Wu, Ji Xu
10 Jun 2020

Double Trouble in Double Descent: Bias and Variance(s) in the Lazy Regime
Stéphane d'Ascoli, Maria Refinetti, Giulio Biroli, Florent Krzakala
02 Mar 2020

Generalisation error in learning with random features and the hidden manifold model
Federica Gerace, Bruno Loureiro, Florent Krzakala, M. Mézard, Lenka Zdeborová
21 Feb 2020

Spectrum Dependent Learning Curves in Kernel Regression and Wide Neural Networks
Blake Bordelon, Abdulkadir Canatar, Cengiz Pehlevan
07 Feb 2020

Scaling Laws for Neural Language Models
Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei
23 Jan 2020

Neural Tangents: Fast and Easy Infinite Neural Networks in Python
Roman Novak, Lechao Xiao, Jiri Hron, Jaehoon Lee, Alexander A. Alemi, Jascha Narain Sohl-Dickstein, S. Schoenholz
05 Dec 2019

The generalization error of random features regression: Precise asymptotics and double descent curve
Song Mei, Andrea Montanari
14 Aug 2019

Linearized two-layers neural networks in high dimension
Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, Andrea Montanari
27 Apr 2019

On Exact Computation with an Infinitely Wide Neural Net
Sanjeev Arora, S. Du, Wei Hu, Zhiyuan Li, Ruslan Salakhutdinov, Ruosong Wang
26 Apr 2019

Surprises in High-Dimensional Ridgeless Least Squares Interpolation
Trevor Hastie, Andrea Montanari, Saharon Rosset, Robert Tibshirani
19 Mar 2019

Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent
Jaehoon Lee, Lechao Xiao, S. Schoenholz, Yasaman Bahri, Roman Novak, Jascha Narain Sohl-Dickstein, Jeffrey Pennington
18 Feb 2019

Reconciling modern machine learning practice and the bias-variance trade-off
M. Belkin, Daniel J. Hsu, Siyuan Ma, Soumik Mandal
28 Dec 2018

Bayesian Deep Convolutional Networks with Many Channels are Gaussian Processes
Roman Novak, Lechao Xiao, Jaehoon Lee, Yasaman Bahri, Greg Yang, Jiri Hron, Daniel A. Abolafia, Jeffrey Pennington, Jascha Narain Sohl-Dickstein
11 Oct 2018

Neural Tangent Kernel: Convergence and Generalization in Neural Networks
Arthur Jacot, Franck Gabriel, Clément Hongler
20 Jun 2018

High-dimensional dynamics of generalization error in neural networks
Madhu S. Advani, Andrew M. Saxe
10 Oct 2017