Tensor Programs IIb: Architectural Universality of Neural Tangent Kernel Training Dynamics

8 May 2021
Greg Yang, Etai Littwin

Papers citing "Tensor Programs IIb: Architectural Universality of Neural Tangent Kernel Training Dynamics"

19 / 19 papers shown

Input Space Mode Connectivity in Deep Neural Networks
Jakub Vrabel, Ori Shem-Ur, Yaron Oz, David Krueger
09 Sep 2024

u-μP: The Unit-Scaled Maximal Update Parametrization
Charlie Blake, C. Eichenberg, Josef Dean, Lukas Balles, Luke Y. Prince, Bjorn Deiseroth, Andres Felipe Cruz Salinas, Carlo Luschi, Samuel Weinbach, Douglas Orr
24 Jul 2024

Parameter-Efficient Fine-Tuning for Continual Learning: A Neural Tangent Kernel Perspective
Jingren Liu, Zhong Ji, YunLong Yu, Jiale Cao, Yanwei Pang, Jungong Han, Xuelong Li
24 Jul 2024

Weak Correlations as the Underlying Principle for Linearization of Gradient-Based Learning Systems
Ori Shem-Ur, Yaron Oz
08 Jan 2024

Quantitative CLTs in Deep Neural Networks
Stefano Favaro, Boris Hanin, Domenico Marinucci, I. Nourdin, G. Peccati
12 Jul 2023

Dynamics of Finite Width Kernel and Prediction Fluctuations in Mean Field Neural Networks
Blake Bordelon, Cengiz Pehlevan
06 Apr 2023

Controlled Descent Training
Viktor Andersson, B. Varga, Vincent Szolnoky, Andreas Syrén, Rebecka Jörnsten, Balázs Kulcsár
16 Mar 2023

Width and Depth Limits Commute in Residual Networks
Soufiane Hayou, Greg Yang
01 Feb 2023

An Analysis of Attention via the Lens of Exchangeability and Latent Variable Models
Yufeng Zhang, Boyi Liu, Qi Cai, Lingxiao Wang, Zhaoran Wang
30 Dec 2022

Generalisation under gradient descent via deterministic PAC-Bayes
Eugenio Clerico, Tyler Farghly, George Deligiannidis, Benjamin Guedj, Arnaud Doucet
06 Sep 2022

Efficient Methods for Natural Language Processing: A Survey
Marcos Vinícius Treviso, Ji-Ung Lee, Tianchu Ji, Betty van Aken, Qingqing Cao, ..., Emma Strubell, Niranjan Balasubramanian, Leon Derczynski, Iryna Gurevych, Roy Schwartz
31 Aug 2022

Making Look-Ahead Active Learning Strategies Feasible with Neural Tangent Kernels
Mohamad Amin Mohamadi, Wonho Bae, Danica J. Sutherland
25 Jun 2022

Large-width asymptotics for ReLU neural networks with α-Stable initializations
Stefano Favaro, S. Fortini, Stefano Peluchetti
16 Jun 2022

Overcoming the Spectral Bias of Neural Value Approximation
Ge Yang, Anurag Ajay, Pulkit Agrawal
09 Jun 2022

Empirical Phase Diagram for Three-layer Neural Networks with Infinite Width
Hanxu Zhou, Qixuan Zhou, Zhenyuan Jin, Tao Luo, Yaoyu Zhang, Zhi-Qin John Xu
24 May 2022

Self-Consistent Dynamical Field Theory of Kernel Evolution in Wide Neural Networks
Blake Bordelon, Cengiz Pehlevan
19 May 2022

Tensor Programs V: Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer
Greg Yang, J. E. Hu, Igor Babuschkin, Szymon Sidor, Xiaodong Liu, David Farhi, Nick Ryder, J. Pachocki, Weizhu Chen, Jianfeng Gao
07 Mar 2022

A generalization gap estimation for overparameterized models via the Langevin functional variance
Akifumi Okuno, Keisuke Yano
07 Dec 2021

Learning Curves for Continual Learning in Neural Networks: Self-Knowledge Transfer and Forgetting
Ryo Karakida, S. Akaho
03 Dec 2021