Demystifying Batch Normalization in ReLU Networks: Equivalent Convex Optimization Models and Implicit Regularization
arXiv:2103.01499, 2 March 2021
Tolga Ergen, Arda Sahiner, Batu Mehmet Ozturkler, John M. Pauly, Morteza Mardani, Mert Pilanci
Papers citing "Demystifying Batch Normalization in ReLU Networks: Equivalent Convex Optimization Models and Implicit Regularization" (10 of 10 papers shown):
1. Linguistic Collapse: Neural Collapse in (Large) Language Models
   Robert Wu, V. Papyan (28 May 2024)

2. When Deep Learning Meets Polyhedral Theory: A Survey
   Joey Huchette, Gonzalo Muñoz, Thiago Serra, Calvin Tsay (29 Apr 2023)

3. Neural Collapse: A Review on Modelling Principles and Generalization
   Vignesh Kothapalli (08 Jun 2022)

4. Fast Convex Optimization for Two-Layer ReLU Networks: Equivalent Model Classes and Cone Decompositions
   Aaron Mishkin, Arda Sahiner, Mert Pilanci (02 Feb 2022)

5. Path Regularization: A Convexity and Sparsity Inducing Regularization for Parallel ReLU Networks
   Tolga Ergen, Mert Pilanci (18 Oct 2021)

6. The Convex Geometry of Backpropagation: Neural Network Gradient Flows Converge to Extreme Points of the Dual Convex Program
   Yifei Wang, Mert Pilanci (13 Oct 2021)

7. Parallel Deep Neural Networks Have Zero Duality Gap
   Yifei Wang, Tolga Ergen, Mert Pilanci (13 Oct 2021)

8. Global Optimality Beyond Two Layers: Training Deep ReLU Networks via Convex Programs
   Tolga Ergen, Mert Pilanci (11 Oct 2021)

9. Scaled ReLU Matters for Training Vision Transformers
   Pichao Wang, Xue Wang, Haowen Luo, Jingkai Zhou, Zhipeng Zhou, Fan Wang, Hao Li, R. L. Jin (08 Sep 2021)

10. Convex Geometry and Duality of Over-parameterized Neural Networks
    Tolga Ergen, Mert Pilanci (25 Feb 2020)