Neural Networks as Kernel Learners: The Silent Alignment Effect
arXiv: 2111.00034
29 October 2021
Alexander B. Atanasov, Blake Bordelon, Cengiz Pehlevan · MLT

Papers citing "Neural Networks as Kernel Learners: The Silent Alignment Effect"

Showing 13 of 63 citing papers.
What Can the Neural Tangent Kernel Tell Us About Adversarial Robustness?
  Nikolaos Tsilivis, Julia Kempe · AAML · 11 Oct 2022
The Influence of Learning Rule on Representation Dynamics in Wide Neural Networks
  Blake Bordelon, Cengiz Pehlevan · 05 Oct 2022
On Kernel Regression with Data-Dependent Kernels
  James B. Simon · BDL · 04 Sep 2022
A view of mini-batch SGD via generating functions: conditions of convergence, phase transitions, benefit from negative momenta
  Maksim Velikanov, Denis Kuznedelev, Dmitry Yarotsky · 22 Jun 2022
Limitations of the NTK for Understanding Generalization in Deep Learning
  Nikhil Vyas, Yamini Bansal, Preetum Nakkiran · 20 Jun 2022
Spectral Bias Outside the Training Set for Deep Networks in the Kernel Regime
  Benjamin Bowman, Guido Montúfar · 06 Jun 2022
Self-Consistent Dynamical Field Theory of Kernel Evolution in Wide Neural Networks
  Blake Bordelon, Cengiz Pehlevan · MLT · 19 May 2022
A Framework and Benchmark for Deep Batch Active Learning for Regression
  David Holzmüller, Viktor Zaverkin, Johannes Kastner, Ingo Steinwart · UQ, CV, BDL, GP · 17 Mar 2022
Tight Convergence Rate Bounds for Optimization Under Power Law Spectral Conditions
  Maksim Velikanov, Dmitry Yarotsky · 02 Feb 2022
Separation of Scales and a Thermodynamic Description of Feature Learning in Some CNNs
  Inbar Seroussi, Gadi Naveh, Zohar Ringel · 31 Dec 2021
Depth induces scale-averaging in overparameterized linear Bayesian neural networks
  Jacob A. Zavatone-Veth, Cengiz Pehlevan · BDL, UQ, CV, MDE · 23 Nov 2021
A Theory of Neural Tangent Kernel Alignment and Its Influence on Training
  H. Shan, Blake Bordelon · 29 May 2021
Properties of the After Kernel
  Philip M. Long · 21 May 2021