Neural Tangent Kernel: Convergence and Generalization in Neural Networks
20 June 2018
Arthur Jacot, Franck Gabriel, Clément Hongler
ArXiv (abs) · PDF · HTML
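
For context, the kernel introduced in this paper is the Gram matrix of the network's parameter gradients (notation assumed here: f(·; θ) denotes the network function with parameters θ = (θ_1, …, θ_P)):

\[
\Theta(x, x') = \nabla_\theta f(x;\theta)^\top \nabla_\theta f(x';\theta)
              = \sum_{p=1}^{P} \partial_{\theta_p} f(x;\theta)\, \partial_{\theta_p} f(x';\theta).
\]

In the infinite-width limit, the paper shows this kernel becomes deterministic at initialization and stays constant during training, so gradient descent on the network behaves like kernel regression with the limiting NTK.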

Papers citing "Neural Tangent Kernel: Convergence and Generalization in Neural Networks"

Showing 50 of 1,193 citing papers:
LENSLLM: Unveiling Fine-Tuning Dynamics for LLM Selection
  Xinyue Zeng, Haohui Wang, Junhong Lin, Jun Wu, Tyler Cody, Dawei Zhou · 431 / 0 / 0 · 01 May 2025
On the Importance of Gaussianizing Representations
  Daniel Eftekhari, Vardan Papyan · 77 / 0 / 0 · 01 May 2025
Neuronal correlations shape the scaling behavior of memory capacity and nonlinear computational capability of recurrent neural networks
  Shotaro Takasu, Toshio Aoyagi · 107 / 0 / 0 · 28 Apr 2025
Reliable and Efficient Inverse Analysis using Physics-Informed Neural Networks with Distance Functions and Adaptive Weight Tuning
  Shota Deguchi, Mitsuteru Asai · PINN, AI4CE · 174 / 0 / 0 · 25 Apr 2025
Deep learning with missing data
  Tianyi Ma, Tengyao Wang, R. Samworth · 269 / 1 / 0 · 21 Apr 2025
Hadamard product in deep learning: Introduction, Advances and Challenges
  Grigorios G. Chrysos, Yongtao Wu, Razvan Pascanu, Philip Torr, Volkan Cevher · AAML · 170 / 2 / 0 · 17 Apr 2025
Generalization through variance: how noise shapes inductive biases in diffusion models
  John J. Vastola · DiffM · 486 / 5 / 0 · 16 Apr 2025
When is Task Vector Provably Effective for Model Editing? A Generalization Analysis of Nonlinear Transformers
  Hongkang Li, Yihua Zhang, Shuai Zhang, Ming Wang, Sijia Liu, Pin-Yu Chen · MoMe · 258 / 10 / 0 · 15 Apr 2025
Divergence of Empirical Neural Tangent Kernel in Classification Problems
  Zixiong Yu, Songtao Tian, Guhan Chen · 68 / 0 / 0 · 15 Apr 2025
Free Random Projection for In-Context Reinforcement Learning
  Tomohiro Hayase, B. Collins, Nakamasa Inoue · 91 / 0 / 0 · 09 Apr 2025
PEAKS: Selecting Key Training Examples Incrementally via Prediction Error Anchored by Kernel Similarity
  Mustafa Burak Gurbuz, Xingyu Zheng, C. Dovrolis · OOD · 94 / 0 / 0 · 07 Apr 2025
ReLU Networks as Random Functions: Their Distribution in Probability Space
  Shreyas Chaudhari, J. M. F. Moura · 87 / 0 / 0 · 28 Mar 2025
On the Cone Effect in the Learning Dynamics
  Zhanpeng Zhou, Yongyi Yang, Jie Ren, Mahito Sugiyama, Junchi Yan · 116 / 0 / 0 · 20 Mar 2025
Neural Tangent Kernel of Neural Networks with Loss Informed by Differential Operators
  Weiye Gan, Yicheng Li, Q. Lin, Zuoqiang Shi · 75 / 0 / 0 · 14 Mar 2025
Contextual Similarity Distillation: Ensemble Uncertainties with a Single Model
  Moritz A. Zanger, Pascal R. van der Vaart, Wendelin Bohmer, M. Spaan · UQCV, BDL · 507 / 2 / 0 · 14 Mar 2025
Learning richness modulates equality reasoning in neural networks
  William L. Tong, Cengiz Pehlevan · 66 / 0 / 0 · 12 Mar 2025
Context-aware Biases for Length Extrapolation
  Ali Veisi, Hamidreza Amirzadeh, Amir Mansourian · 165 / 1 / 0 · 11 Mar 2025
PIED: Physics-Informed Experimental Design for Inverse Problems
  Apivich Hemachandra, Gregory Kang Ruey Lau, Szu Hui Ng, Bryan Kian Hsiang Low · PINN · 108 / 0 / 0 · 10 Mar 2025
Make Haste Slowly: A Theory of Emergent Structured Mixed Selectivity in Feature Learning ReLU Networks
  Devon Jarvis, Richard Klein, Benjamin Rosman, Andrew M. Saxe · MLT · 148 / 2 / 0 · 08 Mar 2025
Asymptotic Analysis of Two-Layer Neural Networks after One Gradient Step under Gaussian Mixtures Data with Structure
  Samet Demir, Zafer Dogan · MLT · 94 / 0 / 0 · 02 Mar 2025
Position: Solve Layerwise Linear Models First to Understand Neural Dynamical Phenomena (Neural Collapse, Emergence, Lazy/Rich Regime, and Grokking)
  Yoonsoo Nam, Seok Hyeong Lee, Clementine Domine, Yea Chan Park, Charles London, Wonyl Choi, Niclas Goring, Seungjai Lee · AI4CE · 209 / 1 / 0 · 28 Feb 2025
Variation Matters: from Mitigating to Embracing Zero-Shot NAS Ranking Function Variation
  Pavel Rumiantsev, Mark Coates · 96 / 0 / 0 · 27 Feb 2025
Training Large Neural Networks With Low-Dimensional Error Feedback
  Maher Hanut, Jonathan Kadmon · 133 / 1 / 0 · 27 Feb 2025
Bridging Critical Gaps in Convergent Learning: How Representational Alignment Evolves Across Layers, Training, and Distribution Shifts
  Chaitanya Kapoor, Sudhanshu Srivastava, Meenakshi Khosla · 116 / 0 / 0 · 26 Feb 2025
Convergence of Shallow ReLU Networks on Weakly Interacting Data
  Léo Dana, Francis R. Bach, Loucas Pillaud-Vivien · MLT · 95 / 2 / 0 · 24 Feb 2025
FACTR: Force-Attending Curriculum Training for Contact-Rich Policy Learning
  Jason Jingzhou Liu, Yulong Li, Kenneth Shaw, Tony Tao, Ruslan Salakhutdinov, Deepak Pathak · OffRL · 159 / 2 / 0 · 24 Feb 2025
Feature maps for the Laplacian kernel and its generalizations
  Sudhendu Ahir, Parthe Pandit · 101 / 0 / 0 · 24 Feb 2025
Explainable Neural Networks with Guarantees: A Sparse Estimation Approach
  Antoine Ledent, Peng Liu · FAtt · 355 / 0 / 0 · 20 Feb 2025
Escaping from the Barren Plateau via Gaussian Initializations in Deep Variational Quantum Circuits
  Kaining Zhang, Liu Liu, Min-hsiu Hsieh, Dacheng Tao · 147 / 63 / 0 · 20 Feb 2025
Linear Mode Connectivity in Differentiable Tree Ensembles
  Ryuichi Kanoh, M. Sugiyama · 228 / 1 / 0 · 17 Feb 2025
EquiTabPFN: A Target-Permutation Equivariant Prior Fitted Networks
  Michael Arbel, David Salinas, Frank Hutter · 133 / 3 / 0 · 10 Feb 2025
Deep Linear Network Training Dynamics from Random Initialization: Data, Width, Depth, and Hyperparameter Transfer
  Blake Bordelon, Cengiz Pehlevan · AI4CE · 241 / 1 / 0 · 04 Feb 2025
Observation Noise and Initialization in Wide Neural Networks
  Sergio Calvo-Ordoñez, Jonathan Plenk, Richard Bergna, Alvaro Cartea, Jose Miguel Hernandez-Lobato, Konstantina Palla, Kamil Ciosek · 116 / 1 / 0 · 03 Feb 2025
STAF: Sinusoidal Trainable Activation Functions for Implicit Neural Representation
  Alireza Morsali, MohammadJavad Vaez, Mohammadhossein Soltani, Amirhossein Kazerouni, Babak Taati, Morteza Mohammad-Noori · 456 / 1 / 0 · 02 Feb 2025
GraphMinNet: Learning Dependencies in Graphs with Light Complexity Minimal Architecture
  Md Atik Ahamed, Andrew Cheng, Qiang Ye, Q. Cheng · GNN · 107 / 0 / 0 · 01 Feb 2025
Position: Curvature Matrices Should Be Democratized via Linear Operators
  Felix Dangel, Runa Eschenhagen, Weronika Ormaniec, Andres Fernandez, Lukas Tatzel, Agustinus Kristiadi · 135 / 4 / 0 · 31 Jan 2025
Algebra Unveils Deep Learning -- An Invitation to Neuroalgebraic Geometry
  Giovanni Luca Marchetti, Vahid Shahverdi, Stefano Mereta, Matthew Trager, Kathlén Kohn · 153 / 2 / 0 · 31 Jan 2025
A theoretical framework for overfitting in energy-based modeling
  Giovanni Catania, A. Decelle, Cyril Furtlehner, Beatriz Seoane · 179 / 2 / 0 · 31 Jan 2025
Scanning Trojaned Models Using Out-of-Distribution Samples
  Hossein Mirzaei, Ali Ansari, Bahar Dibaei Nia, Mojtaba Nafez, Moein Madadi, ..., Kian Shamsaie, Mahdi Hajialilue, Jafar Habibi, Mohammad Sabokrou, M. Rohban · OODD · 143 / 3 / 0 · 28 Jan 2025
Task Arithmetic in Trust Region: A Training-Free Model Merging Approach to Navigate Knowledge Conflicts
  Wenju Sun, Qingyong Li, Wen Wang, Yangli-ao Geng, Boyang Li · 192 / 5 / 0 · 28 Jan 2025
Decentralized Low-Rank Fine-Tuning of Large Language Models
  Sajjad Ghiasvand, Mahnoosh Alizadeh, Ramtin Pedarsani · ALM · 152 / 2 / 0 · 26 Jan 2025
On Learning Representations for Tabular Data Distillation
  Inwon Kang, Parikshit Ram, Yi Zhou, Horst Samulowitz, Oshani Seneviratne · DD · 113 / 0 / 0 · 23 Jan 2025
Physics of Skill Learning
  Ziming Liu, Yizhou Liu, Eric J. Michaud, Jeff Gore, Max Tegmark · 119 / 2 / 0 · 21 Jan 2025
Generating visual explanations from deep networks using implicit neural representations
  Michal Byra, Henrik Skibbe · GAN, FAtt · 111 / 0 / 0 · 20 Jan 2025
Issues with Neural Tangent Kernel Approach to Neural Networks
  Haoran Liu, Anthony S. Tai, David J. Crandall, Chunfeng Huang · 105 / 0 / 0 · 19 Jan 2025
Flexible task abstractions emerge in linear networks with fast and bounded units
  Kai Sandbrink, Jan P. Bauer, A. Proca, Andrew M. Saxe, Christopher Summerfield, Ali Hummos · 121 / 2 / 0 · 17 Jan 2025
Globally Convergent Variational Inference
  Declan McNamara, J. Loper, Jeffrey Regier · 101 / 0 / 0 · 14 Jan 2025
Geometry and Optimization of Shallow Polynomial Networks
  Yossi Arjevani, Joan Bruna, Joe Kileel, Elzbieta Polak, Matthew Trager · 86 / 3 / 0 · 10 Jan 2025
Mean-Field Analysis for Learning Subspace-Sparse Polynomials with Gaussian Input
  Ziang Chen, Rong Ge · MLT · 150 / 1 / 0 · 10 Jan 2025
Time Transfer: On Optimal Learning Rate and Batch Size In The Infinite Data Limit
  Oleg Filatov, Jan Ebert, Jiangtao Wang, Stefan Kesselheim · 115 / 4 / 0 · 10 Jan 2025