On the Generalization Power of Overfitted Two-Layer Neural Tangent Kernel Models

Peizhong Ju, Xiaojun Lin, Ness B. Shroff
9 March 2021 · arXiv 2103.05243 · MLT

Papers citing "On the Generalization Power of Overfitted Two-Layer Neural Tangent Kernel Models" (10 of 10 shown)

1. The Dynamics of Gradient Descent for Overparametrized Neural Networks
   Siddhartha Satpathi, R. Srikant
   13 May 2021 · MLT, AI4CE

2. A Random Matrix Analysis of Random Fourier Features: Beyond the Gaussian Kernel, a Precise Phase Transition, and the Corresponding Double Descent
   Zhenyu Liao, Romain Couillet, Michael W. Mahoney
   09 Jun 2020

3. Implicit Regularization of Random Feature Models
   Arthur Jacot, Berfin Simsek, Francesco Spadaro, Clément Hongler, Franck Gabriel
   19 Feb 2020

4. Linearized two-layers neural networks in high dimension
   Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, Andrea Montanari
   27 Apr 2019 · MLT

5. Surprises in High-Dimensional Ridgeless Least Squares Interpolation
   Trevor Hastie, Andrea Montanari, Saharon Rosset, Robert Tibshirani
   19 Mar 2019

6. Learning and Generalization in Overparameterized Neural Networks, Going Beyond Two Layers
   Zeyuan Allen-Zhu, Yuanzhi Li, Yingyu Liang
   12 Nov 2018 · MLT

7. Gradient Descent Provably Optimizes Over-parameterized Neural Networks
   S. Du, Xiyu Zhai, Barnabás Póczós, Aarti Singh
   04 Oct 2018 · MLT, ODL

8. Learning Overparameterized Neural Networks via Stochastic Gradient Descent on Structured Data
   Yuanzhi Li, Yingyu Liang
   03 Aug 2018 · MLT

9. To understand deep learning we need to understand kernel learning
   M. Belkin, Siyuan Ma, Soumik Mandal
   05 Feb 2018

10. Understanding deep learning requires rethinking generalization
    Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals
    10 Nov 2016 · HAI