
Neural Tangent Kernel: Convergence and Generalization in Neural Networks

20 June 2018
Arthur Jacot
Franck Gabriel
Clément Hongler

Papers citing "Neural Tangent Kernel: Convergence and Generalization in Neural Networks"

Showing 50 of 2,160 citing papers
On Learning Over-parameterized Neural Networks: A Functional Approximation Perspective
Lili Su
Pengkun Yang
MLT
27
53
0
26 May 2019
What Can ResNet Learn Efficiently, Going Beyond Kernels?
Zeyuan Allen-Zhu
Yuanzhi Li
24
183
0
24 May 2019
Explicitizing an Implicit Bias of the Frequency Principle in Two-layer Neural Networks
Yaoyu Zhang
Zhi-Qin John Xu
Tao Luo
Zheng Ma
MLT
AI4CE
36
38
0
24 May 2019
Neural Temporal-Difference and Q-Learning Provably Converge to Global Optima
Qi Cai
Zhuoran Yang
Jason D. Lee
Zhaoran Wang
36
29
0
24 May 2019
Gradient Descent can Learn Less Over-parameterized Two-layer Neural Networks on Classification Problems
Atsushi Nitanda
Geoffrey Chinot
Taiji Suzuki
MLT
16
33
0
23 May 2019
A type of generalization error induced by initialization in deep neural networks
Yaoyu Zhang
Zhi-Qin John Xu
Tao Luo
Zheng Ma
9
50
0
19 May 2019
An Information Theoretic Interpretation to Deep Neural Networks
Shao-Lun Huang
Xiangxiang Xu
Lizhong Zheng
G. Wornell
FAtt
22
41
0
16 May 2019
Do Kernel and Neural Embeddings Help in Training and Generalization?
Arman Rahbar
Emilio Jorge
Devdatt Dubhashi
Morteza Haghir Chehreghani
MLT
25
0
0
13 May 2019
The Effect of Network Width on Stochastic Gradient Descent and Generalization: an Empirical Study
Daniel S. Park
Jascha Narain Sohl-Dickstein
Quoc V. Le
Samuel L. Smith
27
57
0
09 May 2019
Data-dependent Sample Complexity of Deep Neural Networks via Lipschitz Augmentation
Colin Wei
Tengyu Ma
25
109
0
09 May 2019
Similarity of Neural Network Representations Revisited
Simon Kornblith
Mohammad Norouzi
Honglak Lee
Geoffrey E. Hinton
70
1,361
0
01 May 2019
Linearized two-layers neural networks in high dimension
Behrooz Ghorbani
Song Mei
Theodor Misiakiewicz
Andrea Montanari
MLT
18
241
0
27 Apr 2019
On Exact Computation with an Infinitely Wide Neural Net
Sanjeev Arora
S. Du
Wei Hu
Zhiyuan Li
Ruslan Salakhutdinov
Ruosong Wang
44
905
0
26 Apr 2019
Implicit regularization for deep neural networks driven by an Ornstein-Uhlenbeck like process
Guy Blanc
Neha Gupta
Gregory Valiant
Paul Valiant
24
144
0
19 Apr 2019
The Impact of Neural Network Overparameterization on Gradient Confusion and Stochastic Gradient Descent
Karthik A. Sankararaman
Soham De
Zheng Xu
Yifan Jiang
Tom Goldstein
ODL
27
103
0
15 Apr 2019
A Selective Overview of Deep Learning
Jianqing Fan
Cong Ma
Yiqiao Zhong
BDL
VLM
38
136
0
10 Apr 2019
Analysis of the Gradient Descent Algorithm for a Deep Neural Network Model with Skip-connections
Weinan E
Chao Ma
Qingcan Wang
Lei Wu
MLT
37
22
0
10 Apr 2019
A Comparative Analysis of the Optimization and Generalization Property of Two-layer Neural Network and Random Feature Models Under Gradient Descent Dynamics
Weinan E
Chao Ma
Lei Wu
MLT
19
121
0
08 Apr 2019
Convergence rates for the stochastic gradient descent method for non-convex objective functions
Benjamin J. Fehrman
Benjamin Gess
Arnulf Jentzen
19
101
0
02 Apr 2019
On the Power and Limitations of Random Features for Understanding Neural Networks
Gilad Yehudai
Ohad Shamir
MLT
28
181
0
01 Apr 2019
Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks
Mingchen Li
Mahdi Soltanolkotabi
Samet Oymak
NoLa
47
351
0
27 Mar 2019
General Probabilistic Surface Optimization and Log Density Estimation
Dmitry Kopitkov
Vadim Indelman
8
1
0
25 Mar 2019
Towards Characterizing Divergence in Deep Q-Learning
Joshua Achiam
Ethan Knight
Pieter Abbeel
27
96
0
21 Mar 2019
Surprises in High-Dimensional Ridgeless Least Squares Interpolation
Trevor Hastie
Andrea Montanari
Saharon Rosset
R. Tibshirani
31
728
0
19 Mar 2019
Stabilize Deep ResNet with A Sharp Scaling Factor τ
Huishuai Zhang
Da Yu
Mingyang Yi
Wei Chen
Tie-Yan Liu
32
8
0
17 Mar 2019
Mean Field Analysis of Deep Neural Networks
Justin A. Sirignano
K. Spiliopoulos
22
82
0
11 Mar 2019
Function Space Particle Optimization for Bayesian Neural Networks
Ziyu Wang
Tongzheng Ren
Jun Zhu
Bo Zhang
BDL
28
63
0
26 Feb 2019
Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent
Jaehoon Lee
Lechao Xiao
S. Schoenholz
Yasaman Bahri
Roman Novak
Jascha Narain Sohl-Dickstein
Jeffrey Pennington
52
1,077
0
18 Feb 2019
Mean-field theory of two-layers neural networks: dimension-free bounds and kernel limit
Song Mei
Theodor Misiakiewicz
Andrea Montanari
MLT
33
276
0
16 Feb 2019
Scaling Limits of Wide Neural Networks with Weight Sharing: Gaussian Process Behavior, Gradient Independence, and Neural Tangent Kernel Derivation
Greg Yang
11
284
0
13 Feb 2019
Uniform convergence may be unable to explain generalization in deep learning
Vaishnavh Nagarajan
J. Zico Kolter
MoMe
AI4CE
17
309
0
13 Feb 2019
Mean Field Limit of the Learning Dynamics of Multilayer Neural Networks
Phan-Minh Nguyen
AI4CE
24
72
0
07 Feb 2019
Are All Layers Created Equal?
Chiyuan Zhang
Samy Bengio
Y. Singer
20
140
0
06 Feb 2019
Generalization Error Bounds of Gradient Descent for Learning Over-parameterized Deep ReLU Networks
Yuan Cao
Quanquan Gu
ODL
MLT
AI4CE
25
155
0
04 Feb 2019
Stiffness: A New Perspective on Generalization in Neural Networks
Stanislav Fort
Pawel Krzysztof Nowak
Stanislaw Jastrzebski
S. Narayanan
21
94
0
28 Jan 2019
Dynamical Isometry and a Mean Field Theory of LSTMs and GRUs
D. Gilboa
B. Chang
Minmin Chen
Greg Yang
S. Schoenholz
Ed H. Chi
Jeffrey Pennington
34
40
0
25 Jan 2019
Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks
Sanjeev Arora
S. Du
Wei Hu
Zhiyuan Li
Ruosong Wang
MLT
55
961
0
24 Jan 2019
Training Neural Networks as Learning Data-adaptive Kernels: Provable Representation and Approximation Benefits
Xialiang Dou
Tengyuan Liang
MLT
29
43
0
21 Jan 2019
A Theoretical Analysis of Deep Q-Learning
Jianqing Fan
Zhuoran Yang
Yuchen Xie
Zhaoran Wang
23
595
0
01 Jan 2019
On the Benefit of Width for Neural Networks: Disappearance of Bad Basins
Dawei Li
Tian Ding
Ruoyu Sun
29
38
0
28 Dec 2018
On Lazy Training in Differentiable Programming
Lénaïc Chizat
Edouard Oyallon
Francis R. Bach
46
807
0
19 Dec 2018
Learning and Generalization in Overparameterized Neural Networks, Going Beyond Two Layers
Zeyuan Allen-Zhu
Yuanzhi Li
Yingyu Liang
MLT
32
765
0
12 Nov 2018
A Convergence Theory for Deep Learning via Over-Parameterization
Zeyuan Allen-Zhu
Yuanzhi Li
Zhao Song
AI4CE
ODL
51
1,448
0
09 Nov 2018
Gradient Descent Finds Global Minima of Deep Neural Networks
S. Du
J. Lee
Haochuan Li
Liwei Wang
Masayoshi Tomizuka
ODL
44
1,125
0
09 Nov 2018
On the Convergence Rate of Training Recurrent Neural Networks
Zeyuan Allen-Zhu
Yuanzhi Li
Zhao Song
29
191
0
29 Oct 2018
A jamming transition from under- to over-parametrization affects loss landscape and generalization
S. Spigler
Mario Geiger
Stéphane d'Ascoli
Levent Sagun
Giulio Biroli
M. Wyart
27
151
0
22 Oct 2018
Exchangeability and Kernel Invariance in Trained MLPs
Russell Tsuchida
Fred Roosta
M. Gallagher
14
3
0
19 Oct 2018
Regularization Matters: Generalization and Optimization of Neural Nets v.s. their Induced Kernel
Colin Wei
J. Lee
Qiang Liu
Tengyu Ma
26
245
0
12 Oct 2018
Information Geometry of Orthogonal Initializations and Training
Piotr A. Sokół
Il Memming Park
AI4CE
80
16
0
09 Oct 2018
Gradient Descent Provably Optimizes Over-parameterized Neural Networks
S. Du
Xiyu Zhai
Barnabás Póczós
Aarti Singh
MLT
ODL
56
1,252
0
04 Oct 2018