Separation of Scales and a Thermodynamic Description of Feature Learning in Some CNNs

31 December 2021
Inbar Seroussi, Gadi Naveh, Z. Ringel

Papers citing "Separation of Scales and a Thermodynamic Description of Feature Learning in Some CNNs"

32 papers shown

Information-theoretic reduction of deep neural networks to linear models in the overparametrized proportional regime
Francesco Camilli, D. Tieplova, Eleonora Bergamin, Jean Barbier
06 May 2025

Deep Neural Nets as Hamiltonians
Mike Winer, Boris Hanin
31 Mar 2025

Estimating the Spectral Moments of the Kernel Integral Operator from Finite Sample Matrices
Chanwoo Chun, SueYeon Chung, Daniel D. Lee
23 Oct 2024

Stochastic Kernel Regularisation Improves Generalisation in Deep Kernel Machines
Edward Milsom, Ben Anson, Laurence Aitchison
08 Oct 2024

Coding schemes in neural networks learning classification tasks
Alexander van Meegen, H. Sompolinsky
24 Jun 2024

Graph Neural Networks Do Not Always Oversmooth
Bastian Epping, Alexandre René, M. Helias, Michael T. Schaub
04 Jun 2024

Bayesian RG Flow in Neural Network Field Theories
Jessica N. Howard, Marc S. Klinger, Anindita Maiti, A. G. Stapleton
27 May 2024

Wilsonian Renormalization of Neural Network Gaussian Processes
Jessica N. Howard, Ro Jefferson, Anindita Maiti, Z. Ringel
09 May 2024 · BDL

Towards Understanding Inductive Bias in Transformers: A View From Infinity
Itay Lavie, Guy Gur-Ari, Z. Ringel
07 Feb 2024

Asymptotics of feature learning in two-layer networks after one gradient-step
Hugo Cui, Luca Pesce, Yatin Dandi, Florent Krzakala, Yue M. Lu, Lenka Zdeborová, Bruno Loureiro
07 Feb 2024 · MLT

Grokking as a First Order Phase Transition in Two Layer Networks
Noa Rubin, Inbar Seroussi, Z. Ringel
05 Oct 2023

Convolutional Deep Kernel Machines
Edward Milsom, Ben Anson, Laurence Aitchison
18 Sep 2023 · BDL

Speed Limits for Deep Learning
Inbar Seroussi, Alexander A. Alemi, M. Helias, Z. Ringel
27 Jul 2023

Local Kernel Renormalization as a mechanism for feature learning in overparametrized Convolutional Neural Networks
R. Aiudi, R. Pacelli, A. Vezzani, R. Burioni, P. Rotondo
21 Jul 2023 · MLT

Spectral-Bias and Kernel-Task Alignment in Physically Informed Neural Networks
Inbar Seroussi, Asaf Miron, Z. Ringel
12 Jul 2023 · PINN

Quantitative CLTs in Deep Neural Networks
Stefano Favaro, Boris Hanin, Domenico Marinucci, I. Nourdin, G. Peccati
12 Jul 2023 · BDL

Neural Network Field Theories: Non-Gaussianity, Actions, and Locality
M. Demirtaş, James Halverson, Anindita Maiti, M. Schwartz, Keegan Stoner
06 Jul 2023 · AI4CE

Finite-time Lyapunov exponents of deep neural networks
L. Storm, H. Linander, J. Bec, K. Gustavsson, Bernhard Mehlig
21 Jun 2023

How Two-Layer Neural Networks Learn, One (Giant) Step at a Time
Yatin Dandi, Florent Krzakala, Bruno Loureiro, Luca Pesce, Ludovic Stephan
29 May 2023 · MLT

Feature-Learning Networks Are Consistent Across Widths At Realistic Scales
Nikhil Vyas, Alexander B. Atanasov, Blake Bordelon, Depen Morwani, Sabarish Sainathan, C. Pehlevan
28 May 2023

Structures of Neural Network Effective Theories
Çağın Ararat, Tianji Cai, Cem Tekin, Zhengkang Zhang
03 May 2023

Dynamics of Finite Width Kernel and Prediction Fluctuations in Mean Field Neural Networks
Blake Bordelon, C. Pehlevan
06 Apr 2023 · MLT

Bayesian Interpolation with Deep Linear Networks
Boris Hanin, Alexander Zlokapa
29 Dec 2022

Self-Consistent Dynamical Field Theory of Kernel Evolution in Wide Neural Networks
Blake Bordelon, C. Pehlevan
19 May 2022 · MLT

Random matrix analysis of deep neural network weight matrices
M. Thamm, Max Staats, B. Rosenow
28 Mar 2022

Wide Mean-Field Bayesian Neural Networks Ignore the Data
Beau Coker, W. Bruinsma, David R. Burt, Weiwei Pan, Finale Doshi-Velez
23 Feb 2022 · UQCV · BDL

Error Scaling Laws for Kernel Classification under Source and Capacity Conditions
Hugo Cui, Bruno Loureiro, Florent Krzakala, Lenka Zdeborová
29 Jan 2022

A theory of representation learning gives a deep generalisation of kernel methods
Adam X. Yang, Maxime Robeyns, Edward Milsom, Ben Anson, Nandi Schoots, Laurence Aitchison
30 Aug 2021 · BDL

The large learning rate phase of deep learning: the catapult mechanism
Aitor Lewkowycz, Yasaman Bahri, Ethan Dyer, Jascha Narain Sohl-Dickstein, Guy Gur-Ari
04 Mar 2020 · ODL

Spectrum Dependent Learning Curves in Kernel Regression and Wide Neural Networks
Blake Bordelon, Abdulkadir Canatar, C. Pehlevan
07 Feb 2020

Why bigger is not always better: on finite and infinite neural networks
Laurence Aitchison
17 Oct 2019

Improving neural networks by preventing co-adaptation of feature detectors
Geoffrey E. Hinton, Nitish Srivastava, A. Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov
03 Jul 2012 · VLM