Which Frequencies do CNNs Need? Emergent Bottleneck Structure in Feature Learning
Yuxiao Wen, Arthur Jacot
arXiv:2402.08010 · 12 February 2024
Papers citing "Which Frequencies do CNNs Need? Emergent Bottleneck Structure in Feature Learning" (24 papers):
1. "Geometric Inductive Biases of Deep Networks: The Role of Data and Architecture" by Sajad Movahedi, Antonio Orvieto, Seyed-Mohsen Moosavi-Dezfooli (15 Oct 2024) [AI4CE, AAML]
2. "Bottleneck Structure in Learned Features: Low-Dimension vs Regularity Tradeoff" by Arthur Jacot (30 May 2023) [MLT]
3. "Theoretical Analysis of Inductive Biases in Deep Convolutional Networks" by Zihao Wang, Lei Wu (15 May 2023)
4. "Implicit Bias of Large Depth Networks: a Notion of Rank for Nonlinear Functions" by Arthur Jacot (29 Sep 2022)
5. "Synergy and Symmetry in Deep Learning: Interactions between the Data, Model, and Inference Algorithm" by Lechao Xiao, Jeffrey Pennington (11 Jul 2022)
6. "Understanding robustness and generalization of artificial neural networks through Fourier masks" by Nikos Karantzas, E. Besier, J. O. Caro, Xaq Pitkow, A. Tolias, Ankit B. Patel, Fabio Anselmi (16 Mar 2022) [OOD, AAML]
7. "Learning with convolution and pooling operations in kernel methods" by Theodor Misiakiewicz, Song Mei (16 Nov 2021) [MLT]
8. "What Happens after SGD Reaches Zero Loss? --A Mathematical Framework" by Zhiyuan Li, Tianhao Wang, Sanjeev Arora (13 Oct 2021) [MLT]
9. "Label Noise SGD Provably Prefers Flat Global Minimizers" by Alexandru Damian, Tengyu Ma, Jason D. Lee (11 Jun 2021) [NoLa]
10. "Relative stability toward diffeomorphisms indicates performance in deep nets" by Leonardo Petrini, Alessandro Favero, Mario Geiger, Matthieu Wyart (6 May 2021) [OOD]
11. "Learning with invariances in random features and kernel models" by Song Mei, Theodor Misiakiewicz, Andrea Montanari (25 Feb 2021) [OOD]
12. "Towards Resolving the Implicit Bias of Gradient Descent for Matrix Factorization: Greedy Low-Rank Learning" by Zhiyuan Li, Yuping Luo, Kaifeng Lyu (17 Dec 2020)
13. "Why Are Convolutional Nets More Sample-Efficient than Fully-Connected Nets?" by Zhiyuan Li, Yi Zhang, Sanjeev Arora (16 Oct 2020) [BDL, MLT]
14. "Prevalence of Neural Collapse during the terminal phase of deep learning training" by Vardan Papyan, Xuemei Han, D. Donoho (18 Aug 2020)
15. "Enhanced Convolutional Neural Tangent Kernels" by Zhiyuan Li, Ruosong Wang, Dingli Yu, S. Du, Wei Hu, Ruslan Salakhutdinov, Sanjeev Arora (3 Nov 2019)
16. "Similarity of Neural Network Representations Revisited" by Simon Kornblith, Mohammad Norouzi, Honglak Lee, Geoffrey E. Hinton (1 May 2019)
17. "On Exact Computation with an Infinitely Wide Neural Net" by Sanjeev Arora, S. Du, Wei Hu, Zhiyuan Li, Ruslan Salakhutdinov, Ruosong Wang (26 Apr 2019)
18. "A General Theory of Equivariant CNNs on Homogeneous Spaces" by Taco S. Cohen, Mario Geiger, Maurice Weiler (5 Nov 2018) [MLT, AI4CE]
19. "Neural Tangent Kernel: Convergence and Generalization in Neural Networks" by Arthur Jacot, Franck Gabriel, Clément Hongler (20 Jun 2018)
20. "Implicit Bias of Gradient Descent on Linear Convolutional Networks" by Suriya Gunasekar, Jason D. Lee, Daniel Soudry, Nathan Srebro (1 Jun 2018) [MDE]
21. "Universal approximations of invariant maps by neural networks" by Dmitry Yarotsky (26 Apr 2018)
22. "Characterizing Implicit Bias in Terms of Optimization Geometry" by Suriya Gunasekar, Jason D. Lee, Daniel Soudry, Nathan Srebro (22 Feb 2018) [AI4CE]
23. "Understanding Deep Neural Networks with Rectified Linear Units" by R. Arora, A. Basu, Poorya Mianjy, Anirbit Mukherjee (4 Nov 2016) [PINN]
24. "Group Invariant Scattering" by S. Mallat (12 Jan 2011)