ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

The Implicit Bias of Minima Stability in Multivariate Shallow ReLU Networks
arXiv:2306.17499 · 30 June 2023
Mor Shpigel Nacson, Rotem Mulayoff, Greg Ongie, T. Michaeli, Daniel Soudry

Papers citing "The Implicit Bias of Minima Stability in Multivariate Shallow ReLU Networks"

9 / 9 papers shown
Slowing Down Forgetting in Continual Learning
Pascal Janetzky, Tobias Schlagenhauf, Stefan Feuerriegel · CLL · 11 Nov 2024

Where Do Large Learning Rates Lead Us?
Ildus Sadrtdinov, M. Kodryan, Eduard Pokonechny, E. Lobacheva, Dmitry Vetrov · AI4CE · 29 Oct 2024

Stable Minima Cannot Overfit in Univariate ReLU Networks: Generalization by Large Step Sizes
Dan Qiao, Kaiqi Zhang, Esha Singh, Daniel Soudry, Yu-Xiang Wang · NoLa · 10 Jun 2024

Analyzing Neural Network-Based Generative Diffusion Models through Convex Optimization
Fangzhao Zhang, Mert Pilanci · DiffM · 03 Feb 2024

How do Minimum-Norm Shallow Denoisers Look in Function Space?
Chen Zeno, Greg Ongie, Yaniv Blumenfeld, Nir Weinberger, Daniel Soudry · 12 Nov 2023

Exact Mean Square Linear Stability Analysis for SGD
Rotem Mulayoff, T. Michaeli · MLT · 13 Jun 2023

Sharpness-Aware Minimization Leads to Low-Rank Features
Maksym Andriushchenko, Dara Bahri, H. Mobahi, Nicolas Flammarion · AAML · 25 May 2023

ReLU Neural Networks with Linear Layers are Biased Towards Single- and Multi-Index Models
Suzanna Parkinson, Greg Ongie, Rebecca Willett · 24 May 2023

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang · ODL · 15 Sep 2016