Spectrum Dependent Learning Curves in Kernel Regression and Wide Neural Networks
Blake Bordelon, Abdulkadir Canatar, C. Pehlevan
7 February 2020 (arXiv:2002.02561)

Papers citing "Spectrum Dependent Learning Curves in Kernel Regression and Wide Neural Networks"

42 papers shown

  1. Scaling Laws and Representation Learning in Simple Hierarchical Languages: Transformers vs. Convolutional Architectures. Francesco Cagnetta, Alessandro Favero, Antonio Sclocchi, M. Wyart. 11 May 2025.
  2. Learning curves theory for hierarchically compositional data with power-law distributed features. Francesco Cagnetta, Hyunmo Kang, M. Wyart. 11 May 2025.
  3. Information-theoretic reduction of deep neural networks to linear models in the overparametrized proportional regime. Francesco Camilli, D. Tieplova, Eleonora Bergamin, Jean Barbier. 06 May 2025.
  4. Generalization through variance: how noise shapes inductive biases in diffusion models. John J. Vastola. 16 Apr 2025.
  5. How Feature Learning Can Improve Neural Scaling Laws. Blake Bordelon, Alexander B. Atanasov, C. Pehlevan. 26 Sep 2024.
  6. Overfitting Behaviour of Gaussian Kernel Ridgeless Regression: Varying Bandwidth or Dimensionality. Marko Medvedev, Gal Vardi, Nathan Srebro. 05 Sep 2024.
  7. Restoring balance: principled under/oversampling of data for optimal classification. Emanuele Loffredo, Mauro Pastore, Simona Cocco, R. Monasson. 15 May 2024.
  8. Characterizing Overfitting in Kernel Ridgeless Regression Through the Eigenspectrum. Tin Sum Cheng, Aurélien Lucchi, Anastasis Kratsios, David Belius. 02 Feb 2024.
  9. Modify Training Directions in Function Space to Reduce Generalization Error. Yi Yu, Wenlian Lu, Boyu Chen. 25 Jul 2023.
 10. Task Arithmetic in the Tangent Space: Improved Editing of Pre-Trained Models. Guillermo Ortiz-Jiménez, Alessandro Favero, P. Frossard. 22 May 2023.
 11. Sparsity-depth Tradeoff in Infinitely Wide Deep Neural Networks. Chanwoo Chun, Daniel D. Lee. 17 May 2023.
 12. Do deep neural networks have an inbuilt Occam's razor? Chris Mingard, Henry Rees, Guillermo Valle Pérez, A. Louis. 13 Apr 2023.
 13. Dynamics of Finite Width Kernel and Prediction Fluctuations in Mean Field Neural Networks. Blake Bordelon, C. Pehlevan. 06 Apr 2023.
 14. On the Optimality of Misspecified Spectral Algorithms. Hao Zhang, Yicheng Li, Qian Lin. 27 Mar 2023.
 15. Gradient flow in the gaussian covariate model: exact solution of learning curves and multiple descent structures. Antoine Bodin, N. Macris. 13 Dec 2022.
 16. A Solvable Model of Neural Scaling Laws. A. Maloney, Daniel A. Roberts, J. Sully. 30 Oct 2022.
 17. On Kernel Regression with Data-Dependent Kernels. James B. Simon. 04 Sep 2022.
 18. Benign, Tempered, or Catastrophic: A Taxonomy of Overfitting. Neil Rohit Mallinar, James B. Simon, Amirhesam Abedsoltan, Parthe Pandit, M. Belkin, Preetum Nakkiran. 14 Jul 2022.
 19. Target alignment in truncated kernel ridge regression. Arash A. Amini, R. Baumgartner, Dai Feng. 28 Jun 2022.
 20. Learning sparse features can lead to overfitting in neural networks. Leonardo Petrini, Francesco Cagnetta, Eric Vanden-Eijnden, M. Wyart. 24 Jun 2022.
 21. Why Quantization Improves Generalization: NTK of Binary Weight Neural Networks. Kaiqi Zhang, Ming Yin, Yu-Xiang Wang. 13 Jun 2022.
 22. Self-Consistent Dynamical Field Theory of Kernel Evolution in Wide Neural Networks. Blake Bordelon, C. Pehlevan. 19 May 2022.
 23. Sharp Asymptotics of Kernel Ridge Regression Beyond the Linear Regime. Hong Hu, Yue M. Lu. 13 May 2022.
 24. An Equivalence Principle for the Spectrum of Random Inner-Product Kernel Matrices with Polynomial Scalings. Yue M. Lu, H. Yau. 12 May 2022.
 25. Learning curves for the multi-class teacher-student perceptron. Elisabetta Cornacchia, Francesca Mignacco, R. Veiga, Cédric Gerbelot, Bruno Loureiro, Lenka Zdeborová. 22 Mar 2022.
 26. Overview frequency principle/spectral bias in deep learning. Z. Xu, Yaoyu Zhang, Tao Luo. 19 Jan 2022.
 27. Separation of Scales and a Thermodynamic Description of Feature Learning in Some CNNs. Inbar Seroussi, Gadi Naveh, Z. Ringel. 31 Dec 2021.
 28. Learning Curves for Continual Learning in Neural Networks: Self-Knowledge Transfer and Forgetting. Ryo Karakida, S. Akaho. 03 Dec 2021.
 29. Neural Networks as Kernel Learners: The Silent Alignment Effect. Alexander B. Atanasov, Blake Bordelon, C. Pehlevan. 29 Oct 2021.
 30. Locality defeats the curse of dimensionality in convolutional teacher-student scenarios. Alessandro Favero, Francesco Cagnetta, M. Wyart. 16 Jun 2021.
 31. A self consistent theory of Gaussian Processes captures feature learning effects in finite CNNs. Gadi Naveh, Z. Ringel. 08 Jun 2021.
 32. A Neural Pre-Conditioning Active Learning Algorithm to Reduce Label Complexity. Seo Taek Kong, Soomin Jeon, Dongbin Na, Jaewon Lee, Honglak Lee, Kyu-Hwan Jung. 08 Apr 2021.
 33. Learning curves of generic features maps for realistic datasets with a teacher-student model. Bruno Loureiro, Cédric Gerbelot, Hugo Cui, Sebastian Goldt, Florent Krzakala, M. Mézard, Lenka Zdeborová. 16 Feb 2021.
 34. Learning Curve Theory. Marcus Hutter. 08 Feb 2021.
 35. Frequency Principle in Deep Learning Beyond Gradient-descent-based Training. Yuheng Ma, Zhi-Qin John Xu, Jiwei Zhang. 04 Jan 2021.
 36. On 1/n neural representation and robustness. Josue Nassar, Piotr A. Sokól, SueYeon Chung, K. Harris, Il Memming Park. 08 Dec 2020.
 37. Fourier-domain Variational Formulation and Its Well-posedness for Supervised Learning. Tao Luo, Zheng Ma, Zhiwei Wang, Zhi-Qin John Xu, Yaoyu Zhang. 06 Dec 2020.
 38. Associative Memory in Iterated Overparameterized Sigmoid Autoencoders. Yibo Jiang, C. Pehlevan. 30 Jun 2020.
 39. An analytic theory of shallow networks dynamics for hinge loss classification. Franco Pellegrini, Giulio Biroli. 19 Jun 2020.
 40. Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains. Matthew Tancik, Pratul P. Srinivasan, B. Mildenhall, Sara Fridovich-Keil, N. Raghavan, Utkarsh Singhal, R. Ramamoorthi, Jonathan T. Barron, Ren Ng. 18 Jun 2020.
 41. Kernel Alignment Risk Estimator: Risk Prediction from Training Data. Arthur Jacot, Berfin Şimşek, Francesco Spadaro, Clément Hongler, Franck Gabriel. 17 Jun 2020.
 42. Towards Understanding the Spectral Bias of Deep Learning. Yuan Cao, Zhiying Fang, Yue Wu, Ding-Xuan Zhou, Quanquan Gu. 03 Dec 2019.