Asymptotics of representation learning in finite Bayesian neural networks

1 June 2021
Jacob A. Zavatone-Veth, Abdulkadir Canatar, Benjamin S. Ruben, Cengiz Pehlevan

Papers citing "Asymptotics of representation learning in finite Bayesian neural networks"

20 of 20 papers shown
Using Autodiff to Estimate Posterior Moments, Marginals and Samples
Sam Bowyer, Thomas Heap, Laurence Aitchison
26 Oct 2023

Depthwise Hyperparameter Transfer in Residual Networks: Dynamics and Scaling Limit
Blake Bordelon, Lorenzo Noci, Mufan Li, Boris Hanin, Cengiz Pehlevan
28 Sep 2023

A theory of data variability in Neural Network Bayesian inference
Javed Lindner, David Dahmen, Michael Krämer, M. Helias
31 Jul 2023 · BDL

Neural Network Field Theories: Non-Gaussianity, Actions, and Locality
M. Demirtaş, James Halverson, Anindita Maiti, M. Schwartz, Keegan Stoner
06 Jul 2023 · AI4CE

Structures of Neural Network Effective Theories
Çağın Ararat, Tianji Cai, Cem Tekin, Zhengkang Zhang
03 May 2023

Learning curves for deep structured Gaussian feature models
Jacob A. Zavatone-Veth, Cengiz Pehlevan
01 Mar 2023 · MLT

Neural networks learn to magnify areas near decision boundaries
Jacob A. Zavatone-Veth, Sheng Yang, Julian Rubinfien, Cengiz Pehlevan
26 Jan 2023 · MLT, AI4CE

The Curious Case of Benign Memorization
Sotiris Anagnostidis, Gregor Bachmann, Lorenzo Noci, Thomas Hofmann
25 Oct 2022 · AAML

The Neural Covariance SDE: Shaped Infinite Depth-and-Width Networks at Initialization
Mufan Li, Mihai Nica, Daniel M. Roy
06 Jun 2022

Self-Consistent Dynamical Field Theory of Kernel Evolution in Wide Neural Networks
Blake Bordelon, Cengiz Pehlevan
19 May 2022 · MLT

Contrasting random and learned features in deep Bayesian linear regression
Jacob A. Zavatone-Veth, William L. Tong, Cengiz Pehlevan
01 Mar 2022 · BDL, MLT

On neural network kernels and the storage capacity problem
Jacob A. Zavatone-Veth, Cengiz Pehlevan
12 Jan 2022

Complexity from Adaptive-Symmetries Breaking: Global Minima in the Statistical Mechanics of Deep Neural Networks
Shaun Li
03 Jan 2022 · AI4CE

Separation of Scales and a Thermodynamic Description of Feature Learning in Some CNNs
Inbar Seroussi, Gadi Naveh, Zohar Ringel
31 Dec 2021

Depth induces scale-averaging in overparameterized linear Bayesian neural networks
Jacob A. Zavatone-Veth, Cengiz Pehlevan
23 Nov 2021 · BDL, UQCV, MDE

The edge of chaos: quantum field theory and deep neural networks
Kevin T. Grosvenor, R. Jefferson
27 Sep 2021

A theory of representation learning gives a deep generalisation of kernel methods
Adam X. Yang, Maxime Robeyns, Edward Milsom, Ben Anson, Nandi Schoots, Laurence Aitchison
30 Aug 2021 · BDL

The Low-Rank Simplicity Bias in Deep Networks
Minyoung Huh, H. Mobahi, Richard Y. Zhang, Brian Cheung, Pulkit Agrawal, Phillip Isola
18 Mar 2021

Why bigger is not always better: on finite and infinite neural networks
Laurence Aitchison
17 Oct 2019

Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks
Lechao Xiao, Yasaman Bahri, Jascha Narain Sohl-Dickstein, S. Schoenholz, Jeffrey Pennington
14 Jun 2018