ResearchTrend.AI

On the Effective Number of Linear Regions in Shallow Univariate ReLU Networks: Convergence Guarantees and Implicit Bias
arXiv:2205.09072

18 May 2022
Itay Safran, Gal Vardi, Jason D. Lee (MLT)

Papers citing "On the Effective Number of Linear Regions in Shallow Univariate ReLU Networks: Convergence Guarantees and Implicit Bias" (21 papers):
Trained Transformer Classifiers Generalize and Exhibit Benign Overfitting In-Context
Spencer Frei, Gal Vardi (MLT). 02 Oct 2024.
Symmetry & Critical Points
Yossi Arjevani. 26 Aug 2024.
Simplicity Bias of Two-Layer Networks beyond Linearly Separable Data
Nikita Tsoy, Nikola Konstantinov. 27 May 2024.
Disentangle Sample Size and Initialization Effect on Perfect Generalization for Single-Neuron Target
Jiajie Zhao, Zhiwei Bai, Yaoyu Zhang. 22 May 2024.
Directional Convergence Near Small Initializations and Saddles in Two-Homogeneous Neural Networks
Akshay Kumar, Jarvis D. Haupt (ODL). 14 Feb 2024.
The Effect of SGD Batch Size on Autoencoder Learning: Sparsity, Sharpness, and Feature Learning
Nikhil Ghosh, Spencer Frei, Wooseok Ha, Ting Yu (MLT). 06 Aug 2023.
Noisy Interpolation Learning with Shallow Univariate ReLU Networks
Nirmit Joshi, Gal Vardi, Nathan Srebro. 28 Jul 2023.
From Tempered to Benign Overfitting in ReLU Neural Networks
Guy Kornowski, Gilad Yehudai, Ohad Shamir. 24 May 2023.
Leveraging the Two Timescale Regime to Demonstrate Convergence of Neural Networks
P. Marion, Raphael Berthier. 19 Apr 2023.
Benign Overfitting in Linear Classifiers and Leaky ReLU Networks from KKT Conditions for Margin Maximization
Spencer Frei, Gal Vardi, Peter L. Bartlett, Nathan Srebro. 02 Mar 2023.
The Double-Edged Sword of Implicit Bias: Generalization vs. Robustness in ReLU Networks
Spencer Frei, Gal Vardi, Peter L. Bartlett, Nathan Srebro. 02 Mar 2023.
Penalising the Biases in Norm Regularisation Enforces Sparsity
Etienne Boursier, Nicolas Flammarion. 02 Mar 2023.
Testing Stationarity Concepts for ReLU Networks: Hardness, Regularity, and Robust Algorithms
Lai Tian, Anthony Man-Cho So. 23 Feb 2023.
Nonlinear Advantage: Trained Networks Might Not Be As Complex as You Think
Christian H. X. Ali Mehmeti-Göpel, Jan Disselhoff. 30 Nov 2022.
Implicit Bias in Leaky ReLU Networks Trained on High-Dimensional Data
Spencer Frei, Gal Vardi, Peter L. Bartlett, Nathan Srebro, Wei Hu (MLT). 13 Oct 2022.
On the Implicit Bias in Deep-Learning Algorithms
Gal Vardi (FedML, AI4CE). 26 Aug 2022.
Implicit Regularization Towards Rank Minimization in ReLU Networks
Nadav Timor, Gal Vardi, Ohad Shamir. 30 Jan 2022.
On Margin Maximization in Linear and ReLU Networks
Gal Vardi, Ohad Shamir, Nathan Srebro. 06 Oct 2021.
Ridgeless Interpolation with Shallow ReLU Networks in 1D is Nearest Neighbor Curvature Extrapolation and Provably Generalizes on Lipschitz Functions
Boris Hanin (MLT). 27 Sep 2021.
Continuous vs. Discrete Optimization of Deep Neural Networks
Omer Elkabetz, Nadav Cohen. 14 Jul 2021.
A Local Convergence Theory for Mildly Over-Parameterized Two-Layer Neural Network
Mo Zhou, Rong Ge, Chi Jin. 04 Feb 2021.