Implicit Bias of Large Depth Networks: a Notion of Rank for Nonlinear Functions
Arthur Jacot
arXiv:2209.15055 · 29 September 2022
Papers citing "Implicit Bias of Large Depth Networks: a Notion of Rank for Nonlinear Functions" (9 of 9 shown)
Gradient Descent Robustly Learns the Intrinsic Dimension of Data in Training Convolutional Neural Networks
Chenyang Zhang, Peifeng Gao, Difan Zou, Yuan Cao · OOD, MLT · 11 Apr 2025

Explainable Neural Networks with Guarantees: A Sparse Estimation Approach
Antoine Ledent, Peng Liu · FAtt · 20 Feb 2025

Deep Weight Factorization: Sparse Learning Through the Lens of Artificial Symmetries
Chris Kolb, T. Weber, Bernd Bischl, David Rügamer · 04 Feb 2025

How DNNs break the Curse of Dimensionality: Compositionality and Symmetry Learning
Arthur Jacot, Seok Hoan Choi, Yuxiao Wen · AI4CE · 08 Jul 2024

Hamiltonian Mechanics of Feature Learning: Bottleneck Structure in Leaky ResNets
Arthur Jacot, Alexandre Kaiser · 27 May 2024

Which Frequencies do CNNs Need? Emergent Bottleneck Structure in Feature Learning
Yuxiao Wen, Arthur Jacot · 12 Feb 2024

Bottleneck Structure in Learned Features: Low-Dimension vs Regularity Tradeoff
Arthur Jacot · MLT · 30 May 2023

ReLU Neural Networks with Linear Layers are Biased Towards Single- and Multi-Index Models
Suzanna Parkinson, Greg Ongie, Rebecca Willett · 24 May 2023

The Role of Linear Layers in Nonlinear Interpolating Networks
Greg Ongie, Rebecca Willett · 02 Feb 2022