The Low-Rank Simplicity Bias in Deep Networks
arXiv: 2103.10427 · 18 March 2021
Minyoung Huh, H. Mobahi, Richard Y. Zhang, Brian Cheung, Pulkit Agrawal, Phillip Isola
Papers citing "The Low-Rank Simplicity Bias in Deep Networks" (32 papers):
1. Make Haste Slowly: A Theory of Emergent Structured Mixed Selectivity in Feature Learning ReLU Networks — Devon Jarvis, Richard Klein, Benjamin Rosman, Andrew M. Saxe (08 Mar 2025)
2. CoLA: Compute-Efficient Pre-Training of LLMs via Low-Rank Activation — Z. Liu, Ruijie Zhang, Zihan Wang, Zi Yang, Paul Hovland, Bogdan Nicolae, Franck Cappello, Z. Zhang (16 Feb 2025)
3. Sebra: Debiasing Through Self-Guided Bias Ranking — Adarsh Kappiyath, Abhra Chaudhuri, Ajay Jaiswal, Ziquan Liu, Yunpeng Li, Xiatian Zhu, L. Yin (30 Jan 2025)
4. Self-Assembly of a Biologically Plausible Learning Circuit — Q. Liao, Liu Ziyin, Yulu Gan, Brian Cheung, Mark Harnett, Tomaso Poggio (31 Dec 2024)
5. SHAP values via sparse Fourier representation — Ali Gorji, Andisheh Amrollahi, A. Krause (08 Oct 2024)
6. Compressible Dynamics in Deep Overparameterized Low-Rank Learning & Adaptation — Can Yaras, Peng Wang, Laura Balzano, Qing Qu (06 Jun 2024)
7. Pretraining with Random Noise for Fast and Robust Learning without Weight Transport — Jeonghwan Cheon, Sang Wan Lee, Se-Bum Paik (27 May 2024)
8. Learned feature representations are biased by complexity, learning order, position, and more — Andrew Kyle Lampinen, Stephanie C. Y. Chan, Katherine Hermann (09 May 2024)
9. Structure-Preserving Network Compression Via Low-Rank Induced Training Through Linear Layers Composition — Xitong Zhang, Ismail R. Alkhouri, Rongrong Wang (06 May 2024)
10. Neural Redshift: Random Networks are not Random Functions — Damien Teney, A. Nicolicioiu, Valentin Hartmann, Ehsan Abbasnejad (04 Mar 2024)
11. The Expected Loss of Preconditioned Langevin Dynamics Reveals the Hessian Rank — Amitay Bar, Rotem Mulayoff, T. Michaeli, Ronen Talmon (21 Feb 2024)
12. Attribute-Aware Deep Hashing with Self-Consistency for Large-Scale Fine-Grained Image Retrieval — Xiu-Shen Wei, Yang Shen, Xuhao Sun, Peng Wang, Yuxin Peng (21 Nov 2023)
13. Hypernetwork-based Meta-Learning for Low-Rank Physics-Informed Neural Networks — Woojin Cho, Kookjin Lee, Donsub Rim, Noseong Park (14 Oct 2023)
14. Robust low-rank training via approximate orthonormal constraints — Dayana Savostianova, Emanuele Zangrando, Gianluca Ceruti, Francesco Tudisco (02 Jun 2023)
15. ReLU Neural Networks with Linear Layers are Biased Towards Single- and Multi-Index Models — Suzanna Parkinson, Greg Ongie, Rebecca Willett (24 May 2023)
16. Do deep neural networks have an inbuilt Occam's razor? — Chris Mingard, Henry Rees, Guillermo Valle Pérez, A. Louis (13 Apr 2023)
17. Delving Deep into Simplicity Bias for Long-Tailed Image Recognition — Xiu-Shen Wei, Xuhao Sun, Yang Shen, Anqi Xu, Peng Wang, Faen Zhang (07 Feb 2023)
18. Evolution of Neural Tangent Kernels under Benign and Adversarial Training — Noel Loo, Ramin Hasani, Alexander Amini, Daniela Rus (21 Oct 2022)
19. Learning Less Generalizable Patterns with an Asymmetrically Trained Double Classifier for Better Test-Time Adaptation — Thomas Duboudin, Emmanuel Dellandrea, Corentin Abgrall, Gilles Hénaff, Limin Chen (17 Oct 2022)
20. Overcoming the Spectral Bias of Neural Value Approximation — Ge Yang, Anurag Ajay, Pulkit Agrawal (09 Jun 2022)
21. Machine Learning and Deep Learning — A review for Ecologists — Maximilian Pichler, F. Hartig (11 Apr 2022)
22. On the Origins of the Block Structure Phenomenon in Neural Network Representations — Thao Nguyen, M. Raghu, Simon Kornblith (15 Feb 2022)
23. Neural Fields in Visual Computing and Beyond — Yiheng Xie, Towaki Takikawa, Shunsuke Saito, Or Litany, Shiqin Yan, Numair Khan, Federico Tombari, James Tompkin, Vincent Sitzmann, Srinath Sridhar (22 Nov 2021)
24. In Search of Probeable Generalization Measures — Jonathan Jaegerman, Khalil Damouni, M. M. Ankaralı, Konstantinos N. Plataniotis (23 Oct 2021)
25. Visualizing the embedding space to explain the effect of knowledge distillation — Hyun Seung Lee, C. Wallraven (09 Oct 2021)
26. How Does Adversarial Fine-Tuning Benefit BERT? — J. Ebrahimi, Hao Yang, Wei Zhang (31 Aug 2021)
27. Can contrastive learning avoid shortcut solutions? — Joshua Robinson, Li Sun, Ke Yu, Kayhan Batmanghelich, Stefanie Jegelka, S. Sra (21 Jun 2021)
28. Layer Folding: Neural Network Depth Reduction using Activation Linearization — Amir Ben Dror, Niv Zehngut, Avraham Raviv, E. Artyomov, Ran Vitek, R. Jevnisek (17 Jun 2021)
29. Scaling Laws for Neural Language Models — Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei (23 Jan 2020)
30. Why bigger is not always better: on finite and infinite neural networks — Laurence Aitchison (17 Oct 2019)
31. Aggregated Residual Transformations for Deep Neural Networks — Saining Xie, Ross B. Girshick, Piotr Dollár, Z. Tu, Kaiming He (16 Nov 2016)
32. Neural Architecture Search with Reinforcement Learning — Barret Zoph, Quoc V. Le (05 Nov 2016)