ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Implicit Regularization in Deep Learning May Not Be Explainable by Norms
Noam Razin, Nadav Cohen
arXiv:2005.06398 · 13 May 2020

Papers citing "Implicit Regularization in Deep Learning May Not Be Explainable by Norms"

38 / 38 papers shown
  • Gradient Descent Robustly Learns the Intrinsic Dimension of Data in Training Convolutional Neural Networks. Chenyang Zhang, Peifeng Gao, Difan Zou, Yuan Cao. 11 Apr 2025.
  • Connectivity Shapes Implicit Regularization in Matrix Factorization Models for Matrix Completion. Zhiwei Bai, Jiajie Zhao, Yaoyu Zhang. 22 May 2024.
  • In Search of a Data Transformation That Accelerates Neural Field Training. Junwon Seo, Sangyoon Lee, Kwang In Kim, Jaeho Lee. 28 Nov 2023.
  • ReLU Neural Networks with Linear Layers are Biased Towards Single- and Multi-Index Models. Suzanna Parkinson, Greg Ongie, Rebecca Willett. 24 May 2023.
  • Robust Implicit Regularization via Weight Normalization. H. Chou, Holger Rauhut, Rachel A. Ward. 09 May 2023.
  • Penalising the biases in norm regularisation enforces sparsity. Etienne Boursier, Nicolas Flammarion. 02 Mar 2023.
  • Implicit regularization in Heavy-ball momentum accelerated stochastic gradient descent. Avrajit Ghosh, He Lyu, Xitong Zhang, Rongrong Wang. 02 Feb 2023.
  • Generalization on the Unseen, Logic Reasoning and Degree Curriculum. Emmanuel Abbe, Samy Bengio, Aryo Lotfi, Kevin Rizk. 30 Jan 2023.
  • Understanding Incremental Learning of Gradient Descent: A Fine-grained Analysis of Matrix Sensing. Jikai Jin, Zhiyuan Li, Kaifeng Lyu, S. Du, Jason D. Lee. 27 Jan 2023.
  • On the Ability of Graph Neural Networks to Model Interactions Between Vertices. Noam Razin, Tom Verbin, Nadav Cohen. 29 Nov 2022.
  • Infinite-width limit of deep linear neural networks. Lénaïc Chizat, Maria Colombo, Xavier Fernández-Real, Alessio Figalli. 29 Nov 2022.
  • Deep Linear Networks for Matrix Completion -- An Infinite Depth Limit. Nadav Cohen, Govind Menon, Zsolt Veraszto. 22 Oct 2022.
  • Deep Linear Networks can Benignly Overfit when Shallow Ones Do. Niladri S. Chatterji, Philip M. Long. 19 Sep 2022.
  • On the Implicit Bias in Deep-Learning Algorithms. Gal Vardi. 26 Aug 2022.
  • Explicit Use of Fourier Spectrum in Generative Adversarial Networks. Soroush Sheikh Gargar. 02 Aug 2022.
  • Implicit Regularization with Polynomial Growth in Deep Tensor Factorization. Kais Hariz, Hachem Kadri, Stéphane Ayache, Maher Moakher, Thierry Artières. 18 Jul 2022.
  • Implicit Bias of Gradient Descent on Reparametrized Models: On Equivalence to Mirror Descent. Zhiyuan Li, Tianhao Wang, Jason D. Lee, Sanjeev Arora. 08 Jul 2022.
  • Reconstructing Training Data from Trained Neural Networks. Niv Haim, Gal Vardi, Gilad Yehudai, Ohad Shamir, Michal Irani. 15 Jun 2022.
  • Understanding the Generalization Benefit of Normalization Layers: Sharpness Reduction. Kaifeng Lyu, Zhiyuan Li, Sanjeev Arora. 14 Jun 2022.
  • On the Effective Number of Linear Regions in Shallow Univariate ReLU Networks: Convergence Guarantees and Implicit Bias. Itay Safran, Gal Vardi, Jason D. Lee. 18 May 2022.
  • The Mechanism of Prediction Head in Non-contrastive Self-supervised Learning. Zixin Wen, Yuanzhi Li. 12 May 2022.
  • A Note on Machine Learning Approach for Computational Imaging. Bin Dong. 24 Feb 2022.
  • A Data-Augmentation Is Worth A Thousand Samples: Exact Quantification From Analytical Augmented Sample Moments. Randall Balestriero, Ishan Misra, Yann LeCun. 16 Feb 2022.
  • Implicit Regularization in Hierarchical Tensor Factorization and Deep Convolutional Neural Networks. Noam Razin, Asaf Maman, Nadav Cohen. 27 Jan 2022.
  • On the Regularization of Autoencoders. Harald Steck, Dario Garcia-Garcia. 21 Oct 2021.
  • Implicit Bias of Linear Equivariant Networks. Hannah Lawrence, Kristian Georgiev, A. Dienes, B. Kiani. 12 Oct 2021.
  • On Margin Maximization in Linear and ReLU Networks. Gal Vardi, Ohad Shamir, Nathan Srebro. 06 Oct 2021.
  • The loss landscape of deep linear neural networks: a second-order analysis. E. M. Achour, François Malgouyres, Sébastien Gerchinovitz. 28 Jul 2021.
  • A Theoretical Analysis of Fine-tuning with Linear Teachers. Gal Shachaf, Alon Brutzkus, Amir Globerson. 04 Jul 2021.
  • Small random initialization is akin to spectral learning: Optimization and generalization guarantees for overparameterized low-rank matrix reconstruction. Dominik Stöger, Mahdi Soltanolkotabi. 28 Jun 2021.
  • Experiments with Rich Regime Training for Deep Learning. Xinyan Li, A. Banerjee. 26 Feb 2021.
  • On the Implicit Bias of Initialization Shape: Beyond Infinitesimal Mirror Descent. Shahar Azulay, E. Moroshko, Mor Shpigel Nacson, Blake E. Woodworth, Nathan Srebro, Amir Globerson, Daniel Soudry. 19 Feb 2021.
  • Rank-One Measurements of Low-Rank PSD Matrices Have Small Feasible Sets. T. Roddenberry, Santiago Segarra, Anastasios Kyrillidis. 17 Dec 2020.
  • Implicit Bias in Deep Linear Classification: Initialization Scale vs Training Accuracy. E. Moroshko, Suriya Gunasekar, Blake E. Woodworth, J. Lee, Nathan Srebro, Daniel Soudry. 13 Jul 2020.
  • When Does Preconditioning Help or Hurt Generalization? S. Amari, Jimmy Ba, Roger C. Grosse, Xuechen Li, Atsushi Nitanda, Taiji Suzuki, Denny Wu, Ji Xu. 18 Jun 2020.
  • Shape Matters: Understanding the Implicit Bias of the Noise Covariance. Jeff Z. HaoChen, Colin Wei, J. Lee, Tengyu Ma. 15 Jun 2020.
  • To Each Optimizer a Norm, To Each Norm its Generalization. Sharan Vaswani, Reza Babanezhad, Jose Gallego, Aaron Mishkin, Simon Lacoste-Julien, Nicolas Le Roux. 11 Jun 2020.
  • Dropout: Explicit Forms and Capacity Control. R. Arora, Peter L. Bartlett, Poorya Mianjy, Nathan Srebro. 06 Mar 2020.