To understand deep learning we need to understand kernel learning
M. Belkin, Siyuan Ma, Soumik Mandal
arXiv:1802.01396, 5 February 2018

Papers citing "To understand deep learning we need to understand kernel learning"

50 of 271 citing papers shown, most recent first.

Deep Learning Generalization, Extrapolation, and Over-parameterization
Roozbeh Yousefzadeh (19 Mar 2022)

On the Generalization Mystery in Deep Learning
S. Chatterjee, Piotr Zielinski (18 Mar 2022)

More Than a Toy: Random Matrix Models Predict How Real-World Neural Representations Generalize
Alexander Wei, Wei Hu, Jacob Steinhardt (11 Mar 2022)

Chained Generalisation Bounds
Eugenio Clerico, Amitis Shidani, George Deligiannidis, Arnaud Doucet (02 Mar 2022)

Memorize to Generalize: on the Necessity of Interpolation in High Dimensional Linear Regression
Chen Cheng, John C. Duchi, Rohith Kuditipudi (20 Feb 2022)

Geometric Regularization from Overparameterization
Nicholas J. Teague (18 Feb 2022)

Interpolation and Regularization for Causal Learning
L. C. Vankadara, Luca Rendsburg, U. V. Luxburg, Debarghya Ghoshdastidar (18 Feb 2022)

On the Origins of the Block Structure Phenomenon in Neural Network Representations
Thao Nguyen, M. Raghu, Simon Kornblith (15 Feb 2022)

Benign Overfitting in Two-layer Convolutional Neural Networks
Yuan Cao, Zixiang Chen, M. Belkin, Quanquan Gu (14 Feb 2022)

Learning Representation from Neural Fisher Kernel with Low-rank Approximation
Ruixiang Zhang, Shuangfei Zhai, Etai Littwin, J. Susskind (04 Feb 2022)

Faster Convergence of Local SGD for Over-Parameterized Models
Tiancheng Qin, S. Rasoul Etesami, César A. Uribe (30 Jan 2022)

Kernel Methods and Multi-layer Perceptrons Learn Linear Models in High Dimensions
Mojtaba Sahraee-Ardakan, M. Emami, Parthe Pandit, S. Rangan, A. Fletcher (20 Jan 2022)

Benign Overfitting in Adversarially Robust Linear Classification
Jinghui Chen, Yuan Cao, Quanquan Gu (31 Dec 2021)

Over-Parametrized Matrix Factorization in the Presence of Spurious Stationary Points
Armin Eftekhari (25 Dec 2021)

Error Bounds for a Matrix-Vector Product Approximation with Deep ReLU Neural Networks
T. Getu (25 Nov 2021)

Importance of Kernel Bandwidth in Quantum Machine Learning
Ruslan Shaydulin, Stefan M. Wild (09 Nov 2021)

Harmless interpolation in regression and classification with structured features
Andrew D. McRae, Santhosh Karnik, Mark A. Davenport, Vidya Muthukumar (09 Nov 2021)

Neural Networks as Kernel Learners: The Silent Alignment Effect
Alexander B. Atanasov, Blake Bordelon, Cengiz Pehlevan (29 Oct 2021)

Learning curves for Gaussian process regression with power-law priors and targets
Hui Jin, P. Banerjee, Guido Montúfar (23 Oct 2021)

Model, sample, and epoch-wise descents: exact solution of gradient flow in the random feature model
A. Bodin, N. Macris (22 Oct 2021)

Conditioning of Random Feature Matrices: Double Descent and Generalization Error
Zhijun Chen, Hayden Schaeffer (21 Oct 2021)

Learning in High Dimension Always Amounts to Extrapolation
Randall Balestriero, J. Pesenti, Yann LeCun (18 Oct 2021)

NFT-K: Non-Fungible Tangent Kernels
Sina Alemohammad, Hossein Babaei, C. Barberan, Naiming Liu, Lorenzo Luzi, Blake Mason, Richard G. Baraniuk (11 Oct 2021)

Kernel Interpolation as a Bayes Point Machine
Jeremy Bernstein, Alexander R. Farhang, Yisong Yue (08 Oct 2021)

VC dimension of partially quantized neural networks in the overparametrized regime
Yutong Wang, Clayton D. Scott (06 Oct 2021)

Spectral Bias in Practice: The Role of Function Frequency in Generalization
Sara Fridovich-Keil, Raphael Gontijo-Lopes, Rebecca Roelofs (06 Oct 2021)

Classification and Adversarial examples in an Overparameterized Linear Model: A Signal Processing Perspective
Adhyyan Narang, Vidya Muthukumar, A. Sahai (27 Sep 2021)

A Farewell to the Bias-Variance Tradeoff? An Overview of the Theory of Overparameterized Machine Learning
Yehuda Dar, Vidya Muthukumar, Richard G. Baraniuk (06 Sep 2021)

When and how epochwise double descent happens
Cory Stephenson, Tyler Lee (26 Aug 2021)

Interpolation can hurt robust generalization even when there is no noise
Konstantin Donhauser, Alexandru Țifrea, Michael Aerni, Reinhard Heckel, Fanny Yang (05 Aug 2021)

Mitigating deep double descent by concatenating inputs
John Chen, Qihan Wang, Anastasios Kyrillidis (02 Jul 2021)

Assessing Generalization of SGD via Disagreement
Yiding Jiang, Vaishnavh Nagarajan, Christina Baek, J. Zico Kolter (25 Jun 2021)

Shallow Representation is Deep: Learning Uncertainty-aware and Worst-case Random Feature Dynamics
Diego Agudelo-España, Yassine Nemmour, Bernhard Schölkopf, Jia-Jie Zhu (24 Jun 2021)

Compression Implies Generalization
Allan Grønlund, M. Hogsgaard, Lior Kamma, Kasper Green Larsen (15 Jun 2021)

Learning Gaussian Mixtures with Generalised Linear Models: Precise Asymptotics in High-dimensions
Bruno Loureiro, G. Sicuro, Cédric Gerbelot, Alessandro Pacco, Florent Krzakala, Lenka Zdeborová (07 Jun 2021)

Towards an Understanding of Benign Overfitting in Neural Networks
Zhu Li, Zhi Zhou, Arthur Gretton (06 Jun 2021)

Fundamental tradeoffs between memorization and robustness in random features and neural tangent regimes
Elvis Dohmatob (04 Jun 2021)

Out-of-Distribution Generalization in Kernel Regression
Abdulkadir Canatar, Blake Bordelon, Cengiz Pehlevan (04 Jun 2021)

Framing RNN as a kernel method: A neural ODE approach
Adeline Fermanian, Pierre Marion, Jean-Philippe Vert, Gérard Biau (02 Jun 2021)

Fit without fear: remarkable mathematical phenomena of deep learning through the prism of interpolation
M. Belkin (29 May 2021)

Latent Gaussian Model Boosting
Fabio Sigrist (19 May 2021)

Uniform Convergence, Adversarial Spheres and a Simple Remedy
Gregor Bachmann, Seyed-Mohsen Moosavi-Dezfooli, Thomas Hofmann (07 May 2021)

Risk Bounds for Over-parameterized Maximum Margin Classification on Sub-Gaussian Mixtures
Yuan Cao, Quanquan Gu, M. Belkin (28 Apr 2021)

How rotational invariance of common kernels prevents generalization in high dimensions
Konstantin Donhauser, Mingqi Wu, Fanny Yang (09 Apr 2021)

Fitting Elephants
P. Mitra (31 Mar 2021)

Weighted Neural Tangent Kernel: A Generalized and Improved Network-Induced Kernel
Lei Tan, Shutong Wu, Xiaolin Huang (22 Mar 2021)

Comments on Leo Breiman's paper 'Statistical Modeling: The Two Cultures' (Statistical Science, 2001, 16(3), 199-231)
Jelena Bradic, Yinchu Zhu (21 Mar 2021)

On the Generalization Power of Overfitted Two-Layer Neural Tangent Kernel Models
Peizhong Ju, Xiaojun Lin, Ness B. Shroff (09 Mar 2021)

Exact Gap between Generalization Error and Uniform Convergence in Random Feature Models
Zitong Yang, Yu Bai, Song Mei (08 Mar 2021)

Trading Signals In VIX Futures
M. Avellaneda, T. Li, A. Papanicolaou, Gaozhan Wang (02 Mar 2021)