arXiv:2405.17573
Hamiltonian Mechanics of Feature Learning: Bottleneck Structure in Leaky ResNets
Arthur Jacot, Alexandre Kaiser
27 May 2024
Papers citing "Hamiltonian Mechanics of Feature Learning: Bottleneck Structure in Leaky ResNets" (23 of 23 papers shown)
1. Which Frequencies do CNNs Need? Emergent Bottleneck Structure in Feature Learning. Yuxiao Wen, Arthur Jacot. 12 Feb 2024.
2. Residual Alignment: Uncovering the Mechanisms of Residual Networks. Jianing Li, Vardan Papyan. 17 Jan 2024.
3. Mechanism of feature learning in convolutional neural networks. Daniel Beaglehole, Adityanarayanan Radhakrishnan, Parthe Pandit, Misha Belkin. 01 Sep 2023.
4. Bottleneck Structure in Learned Features: Low-Dimension vs Regularity Tradeoff. Arthur Jacot. 30 May 2023.
5. A Rainbow in Deep Network Black Boxes. Florentin Guth, Brice Ménard, G. Rochette, S. Mallat. 29 May 2023.
6. Implicit bias of SGD in $L_{2}$-regularized linear DNNs: One-way jumps from high to low rank. Zihan Wang, Arthur Jacot. 25 May 2023.
7. Deep Neural Collapse Is Provably Optimal for the Deep Unconstrained Features Model. Peter Súkeník, Marco Mondelli, Christoph H. Lampert. 22 May 2023.
8. Implicit Bias of Large Depth Networks: a Notion of Rank for Nonlinear Functions. Arthur Jacot. 29 Sep 2022.
9. Feature Learning in $L_{2}$-regularized DNNs: Attraction/Repulsion and Sparsity. Arthur Jacot, Eugene Golikov, Clément Hongler, Franck Gabriel. 31 May 2022.
10. Turnpike in optimal control of PDEs, ResNets, and beyond. Borjan Geshkovski, Enrique Zuazua. 08 Feb 2022.
11. The staircase property: How hierarchical structure can guide deep learning. Emmanuel Abbe, Enric Boix-Adserà, Matthew Brennan, Guy Bresler, Dheeraj M. Nagaraj. 24 Aug 2021.
12. Towards Resolving the Implicit Bias of Gradient Descent for Matrix Factorization: Greedy Low-Rank Learning. Zhiyuan Li, Yuping Luo, Kaifeng Lyu. 17 Dec 2020.
13. Prevalence of Neural Collapse during the terminal phase of deep learning training. Vardan Papyan, Xuemei Han, D. Donoho. 18 Aug 2020.
14. Implicit Bias of Gradient Descent for Wide Two-layer Neural Networks Trained with the Logistic Loss. Lénaïc Chizat, Francis R. Bach. 11 Feb 2020.
15. Similarity of Neural Network Representations Revisited. Simon Kornblith, Mohammad Norouzi, Honglak Lee, Geoffrey E. Hinton. 01 May 2019.
16. Towards Understanding Linear Word Analogies. Kawin Ethayarajh, David Duvenaud, Graeme Hirst. 11 Oct 2018.
17. Neural Ordinary Differential Equations. T. Chen, Yulia Rubanova, J. Bettencourt, David Duvenaud. 19 Jun 2018.
18. Implicit Bias of Gradient Descent on Linear Convolutional Networks. Suriya Gunasekar, Jason D. Lee, Daniel Soudry, Nathan Srebro. 01 Jun 2018.
19. Characterizing Implicit Bias in Terms of Optimization Geometry. Suriya Gunasekar, Jason D. Lee, Daniel Soudry, Nathan Srebro. 22 Feb 2018.
20. Deep Learning and the Information Bottleneck Principle. Naftali Tishby, Noga Zaslavsky. 09 Mar 2015.
21. Breaking the Curse of Dimensionality with Convex Neural Networks. Francis R. Bach. 30 Dec 2014.
22. Visualizing and Understanding Convolutional Networks. Matthew D. Zeiler, Rob Fergus. 12 Nov 2013.
23. Efficient Estimation of Word Representations in Vector Space. Tomas Mikolov, Kai Chen, G. Corrado, J. Dean. 16 Jan 2013.