ResearchTrend.AI

To understand deep learning we need to understand kernel learning
arXiv:1802.01396 · v3 (latest) · 5 February 2018
M. Belkin, Siyuan Ma, Soumik Mandal
ArXiv (abs) · PDF · HTML

Papers citing "To understand deep learning we need to understand kernel learning" (50 of 271 papers shown)
Generalization Error of Generalized Linear Models in High Dimensions
M. Motavali Emami, Mojtaba Sahraee-Ardakan, Parthe Pandit, S. Rangan, A. Fletcher · AI4CE · 01 May 2020

Random Features for Kernel Approximation: A Survey on Algorithms, Theory, and Beyond
Fanghui Liu, Xiaolin Huang, Yudong Chen, Johan A. K. Suykens · BDL · 23 Apr 2020

Mehler's Formula, Branching Process, and Compositional Kernels of Deep Neural Networks
Tengyuan Liang, Hai Tran-Bach · 09 Apr 2020

Convolutional Spectral Kernel Learning
Jian Li, Yong Liu, Weiping Wang · BDL · 28 Feb 2020

Rethinking Bias-Variance Trade-off for Generalization of Neural Networks
Zitong Yang, Yaodong Yu, Chong You, Jacob Steinhardt, Yi-An Ma · 26 Feb 2020

Understanding and Mitigating the Tradeoff Between Robustness and Accuracy
Aditi Raghunathan, Sang Michael Xie, Fanny Yang, John C. Duchi, Percy Liang · AAML · 25 Feb 2020

Precise Tradeoffs in Adversarial Training for Linear Regression
Adel Javanmard, Mahdi Soltanolkotabi, Hamed Hassani · AAML · 24 Feb 2020

Diversity sampling is an implicit regularization for kernel methods
Michaël Fanuel, J. Schreurs, Johan A. K. Suykens · 20 Feb 2020

Characterizing Structural Regularities of Labeled Data in Overparameterized Models
Ziheng Jiang, Chiyuan Zhang, Kunal Talwar, Michael C. Mozer · TDI · 08 Feb 2020

Spectrum Dependent Learning Curves in Kernel Regression and Wide Neural Networks
Blake Bordelon, Abdulkadir Canatar, Cengiz Pehlevan · 07 Feb 2020

Interpolating Predictors in High-Dimensional Factor Regression
F. Bunea, Seth Strimas-Mackey, M. Wegkamp · 06 Feb 2020

A Precise High-Dimensional Asymptotic Theory for Boosting and Minimum-$\ell_1$-Norm Interpolated Classifiers
Tengyuan Liang, Pragya Sur · 05 Feb 2020

Overfitting Can Be Harmless for Basis Pursuit, But Only to a Degree
Peizhong Ju, Xiaojun Lin, Jia Liu · 02 Feb 2020

A Corrective View of Neural Networks: Representation, Memorization and Learning
Guy Bresler, Dheeraj M. Nagaraj · MLT · 01 Feb 2020

Risk of the Least Squares Minimum Norm Estimator under the Spike Covariance Model
Yasaman Mahdaviyeh, Zacharie Naulet · 31 Dec 2019

Discriminative Clustering with Representation Learning with any Ratio of Labeled to Unlabeled Data
Corinne Jones, Vincent Roulet, Zaïd Harchaoui · 30 Dec 2019

The Generalization Error of the Minimum-norm Solutions for Over-parameterized Neural Networks
E. Weinan, Chao Ma, Lei Wu · 15 Dec 2019

Double descent in the condition number
T. Poggio, Gil Kur, Andy Banburski · 12 Dec 2019

Exact expressions for double descent and implicit regularization via surrogate random design
Michal Derezinski, Feynman T. Liang, Michael W. Mahoney · 10 Dec 2019

In Defense of Uniform Convergence: Generalization via derandomization with an application to interpolating predictors
Jeffrey Negrea, Gintare Karolina Dziugaite, Daniel M. Roy · AI4CE · 09 Dec 2019

A Model of Double Descent for High-dimensional Binary Linear Classification
Zeyu Deng, A. Kammoun, Christos Thrampoulidis · 13 Nov 2019

The Local Elasticity of Neural Networks
Hangfeng He, Weijie J. Su · 15 Oct 2019

Improved Sample Complexities for Deep Networks and Robust Classification via an All-Layer Margin
Colin Wei, Tengyu Ma · AAML, OOD · 09 Oct 2019

The Role of Neural Network Activation Functions
Rahul Parhi, Robert D. Nowak · 05 Oct 2019

On the Multiple Descent of Minimum-Norm Interpolants and Restricted Lower Isometry of Kernels
Tengyuan Liang, Alexander Rakhlin, Xiyu Zhai · 27 Aug 2019

Theoretical Issues in Deep Networks: Approximation, Optimization and Generalization
T. Poggio, Andrzej Banburski, Q. Liao · ODL · 25 Aug 2019

The generalization error of random features regression: Precise asymptotics and double descent curve
Song Mei, Andrea Montanari · 14 Aug 2019

Benign Overfitting in Linear Regression
Peter L. Bartlett, Philip M. Long, Gábor Lugosi, Alexander Tsigler · MLT · 26 Jun 2019

Does Learning Require Memorization? A Short Tale about a Long Tail
Vitaly Feldman · TDI · 12 Jun 2019

Understanding overfitting peaks in generalization error: Analytical risk curves for $l_2$ and $l_1$ penalized interpolation
P. Mitra · 09 Jun 2019

Deep Semi-Supervised Anomaly Detection
Lukas Ruff, Robert A. Vandermeulen, Nico Görnitz, Alexander Binder, Emmanuel Müller, K. Müller, Marius Kloft · UQCV · 06 Jun 2019

Bad Global Minima Exist and SGD Can Reach Them
Shengchao Liu, Dimitris Papailiopoulos, D. Achlioptas · 06 Jun 2019

MaxiMin Active Learning in Overparameterized Model Classes
Mina Karzand, Robert D. Nowak · 29 May 2019

On the Inductive Bias of Neural Tangent Kernels
A. Bietti, Julien Mairal · 29 May 2019

Interpretable deep Gaussian processes with moments
Chi-Ken Lu, Scott Cheng-Hsin Yang, Xiaoran Hao, Patrick Shafto · 27 May 2019

Kernel Truncated Randomized Ridge Regression: Optimal Rates and Low Noise Acceleration
Kwang-Sung Jun, Ashok Cutkosky, Francesco Orabona · 25 May 2019

Do Kernel and Neural Embeddings Help in Training and Generalization?
Arman Rahbar, Emilio Jorge, Devdatt Dubhashi, Morteza Haghir Chehreghani · MLT · 13 May 2019

Linearized two-layers neural networks in high dimension
Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, Andrea Montanari · MLT · 27 Apr 2019

On Exact Computation with an Infinitely Wide Neural Net
Sanjeev Arora, S. Du, Wei Hu, Zhiyuan Li, Ruslan Salakhutdinov, Ruosong Wang · 26 Apr 2019

Sparse Learning for Variable Selection with Structures and Nonlinearities
Magda Gregorova · 26 Mar 2019

Surprises in High-Dimensional Ridgeless Least Squares Interpolation
Trevor Hastie, Andrea Montanari, Saharon Rosset, Robert Tibshirani · 19 Mar 2019

Shallow Neural Networks for Fluid Flow Reconstruction with Limited Sensors
N. Benjamin Erichson, L. Mathelin, Z. Yao, Steven L. Brunton, Michael W. Mahoney, J. Nathan Kutz · AI4CE · 20 Feb 2019

Uniform convergence may be unable to explain generalization in deep learning
Vaishnavh Nagarajan, J. Zico Kolter · MoMe, AI4CE · 13 Feb 2019

Towards moderate overparameterization: global convergence guarantees for training shallow neural networks
Samet Oymak, Mahdi Soltanolkotabi · 12 Feb 2019

KTBoost: Combined Kernel and Tree Boosting
Fabio Sigrist · 11 Feb 2019

Are All Layers Created Equal?
Chiyuan Zhang, Samy Bengio, Y. Singer · 06 Feb 2019

Generalization Error Bounds of Gradient Descent for Learning Over-parameterized Deep ReLU Networks
Yuan Cao, Quanquan Gu · ODL, MLT, AI4CE · 04 Feb 2019

Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks
Sanjeev Arora, S. Du, Wei Hu, Zhiyuan Li, Ruosong Wang · MLT · 24 Jan 2019

Training Neural Networks as Learning Data-adaptive Kernels: Provable Representation and Approximation Benefits
Xialiang Dou, Tengyuan Liang · MLT · 21 Jan 2019

Consistency of Interpolation with Laplace Kernels is a High-Dimensional Phenomenon
Alexander Rakhlin, Xiyu Zhai · 28 Dec 2018