Neural Tangent Kernel Beyond the Infinite-Width Limit: Effects of Depth and Initialization
Mariia Seleznova, Gitta Kutyniok
International Conference on Machine Learning (ICML), 2022
1 February 2022 · arXiv:2202.00553

Papers citing "Neural Tangent Kernel Beyond the Infinite-Width Limit: Effects of Depth and Initialization" (8 papers)
Depth-induced NTK: Bridging Over-parameterized Neural Networks and Deep Neural Kernels
Yong-Ming Tian, Shuang Liang, Shao-Qun Zhang, Feng-Lei Fan
05 Nov 2025

Finite-Width Neural Tangent Kernels from Feynman Diagrams
Max Guillen, Philipp Misof, Jan E. Gerken
15 Aug 2025

Neural Tangent Kernel Analysis to Probe Convergence in Physics-informed Neural Solvers: PIKANs vs. PINNs
Salah A Faroughi, Farinaz Mostajeran
09 Jun 2025

Infinite Width Graph Neural Networks for Node Regression/ Classification
Yunus Cobanoglu
12 Oct 2023

Mind the spikes: Benign overfitting of kernels and neural networks in fixed dimension
Neural Information Processing Systems (NeurIPS), 2023
Moritz Haas, David Holzmüller, U. V. Luxburg, Ingo Steinwart
23 May 2023

Approximation results for Gradient Descent trained Shallow Neural Networks in $1d$
R. Gentile, G. Welper
17 Sep 2022

Diverse Weight Averaging for Out-of-Distribution Generalization
Neural Information Processing Systems (NeurIPS), 2022
Alexandre Ramé, Matthieu Kirchmeyer, Thibaud Rahier, A. Rakotomamonjy, Patrick Gallinari, Matthieu Cord
19 May 2022

Universal Statistics of Fisher Information in Deep Neural Networks: Mean Field Approach
Ryo Karakida, S. Akaho, S. Amari
04 Jun 2018