Globally Optimal Gradient Descent for a ConvNet with Gaussian Inputs (arXiv:1702.07966)
26 February 2017
Alon Brutzkus
Amir Globerson
MLT
Papers citing "Globally Optimal Gradient Descent for a ConvNet with Gaussian Inputs"
50 / 87 papers shown
Mean of Means: A 10-dollar Solution for Human Localization with Calibration-free and Unconstrained Camera Settings
Tianyi Zhang
Wengyu Zhang
Xu-Lu Zhang
Jiaxin Wu
Xiao Wei
Jiannong Cao
Qing Li
37
0
0
28 Jan 2025
Near-Optimal Solutions of Constrained Learning Problems
Juan Elenter
Luiz F. O. Chamon
Alejandro Ribeiro
26
5
0
18 Mar 2024
Resilient Constrained Learning
Ignacio Hounie
Alejandro Ribeiro
Luiz F. O. Chamon
29
10
0
04 Jun 2023
Over-Parameterization Exponentially Slows Down Gradient Descent for Learning a Single Neuron
Weihang Xu
S. Du
37
16
0
20 Feb 2023
Annihilation of Spurious Minima in Two-Layer ReLU Networks
Yossi Arjevani
M. Field
16
8
0
12 Oct 2022
Towards Theoretically Inspired Neural Initialization Optimization
Yibo Yang
Hong Wang
Haobo Yuan
Zhouchen Lin
23
9
0
12 Oct 2022
Neural Networks Efficiently Learn Low-Dimensional Representations with SGD
Alireza Mousavi-Hosseini
Sejun Park
M. Girotti
Ioannis Mitliagkas
Murat A. Erdogdu
MLT
324
48
0
29 Sep 2022
Magnitude and Angle Dynamics in Training Single ReLU Neurons
Sangmin Lee
Byeongsu Sim
Jong Chul Ye
MLT
96
6
0
27 Sep 2022
Implicit Full Waveform Inversion with Deep Neural Representation
Jian Sun
K. Innanen
AI4CE
40
32
0
08 Sep 2022
Local Identifiability of Deep ReLU Neural Networks: the Theory
Joachim Bona-Pellissier
François Malgouyres
François Bachoc
FAtt
67
6
0
15 Jun 2022
The Mechanism of Prediction Head in Non-contrastive Self-supervised Learning
Zixin Wen
Yuanzhi Li
SSL
32
34
0
12 May 2022
Training Fully Connected Neural Networks is ∃ℝ-Complete
Daniel Bertschinger
Christoph Hertrich
Paul Jungeblut
Tillmann Miltzow
Simon Weber
OffRL
61
30
0
04 Apr 2022
Hardness of Noise-Free Learning for Two-Hidden-Layer Neural Networks
Sitan Chen
Aravind Gollakota
Adam R. Klivans
Raghu Meka
24
30
0
10 Feb 2022
How does unlabeled data improve generalization in self-training? A one-hidden-layer theoretical analysis
Shuai Zhang
Meng Wang
Sijia Liu
Pin-Yu Chen
Jinjun Xiong
SSL
MLT
41
22
0
21 Jan 2022
Low-Pass Filtering SGD for Recovering Flat Optima in the Deep Learning Optimization Landscape
Devansh Bisla
Jing Wang
A. Choromańska
25
34
0
20 Jan 2022
Separation of Scales and a Thermodynamic Description of Feature Learning in Some CNNs
Inbar Seroussi
Gadi Naveh
Zohar Ringel
35
51
0
31 Dec 2021
Parameter identifiability of a deep feedforward ReLU neural network
Joachim Bona-Pellissier
François Bachoc
François Malgouyres
41
15
0
24 Dec 2021
Subquadratic Overparameterization for Shallow Neural Networks
Chaehwan Song
Ali Ramezani-Kebrya
Thomas Pethick
Armin Eftekhari
V. Cevher
30
31
0
02 Nov 2021
Path Regularization: A Convexity and Sparsity Inducing Regularization for Parallel ReLU Networks
Tolga Ergen
Mert Pilanci
32
16
0
18 Oct 2021
Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Pruned Neural Networks
Shuai Zhang
Meng Wang
Sijia Liu
Pin-Yu Chen
Jinjun Xiong
UQCV
MLT
31
13
0
12 Oct 2021
Global Optimality Beyond Two Layers: Training Deep ReLU Networks via Convex Programs
Tolga Ergen
Mert Pilanci
OffRL
MLT
32
33
0
11 Oct 2021
Analytic Study of Families of Spurious Minima in Two-Layer ReLU Neural Networks: A Tale of Symmetry II
Yossi Arjevani
M. Field
28
18
0
21 Jul 2021
Toward Understanding the Feature Learning Process of Self-supervised Contrastive Learning
Zixin Wen
Yuanzhi Li
SSL
MLT
32
131
0
31 May 2021
Practical Convex Formulation of Robust One-hidden-layer Neural Network Training
Yatong Bai
Tanmay Gautam
Yujie Gai
Somayeh Sojoudi
AAML
27
3
0
25 May 2021
Understanding self-supervised Learning Dynamics without Contrastive Pairs
Yuandong Tian
Xinlei Chen
Surya Ganguli
SSL
138
281
0
12 Feb 2021
From Local Pseudorandom Generators to Hardness of Learning
Amit Daniely
Gal Vardi
109
30
0
20 Jan 2021
A Convergence Theory Towards Practical Over-parameterized Deep Neural Networks
Asaf Noy
Yi Tian Xu
Y. Aflalo
Lihi Zelnik-Manor
Rong Jin
39
3
0
12 Jan 2021
Towards Understanding Ensemble, Knowledge Distillation and Self-Distillation in Deep Learning
Zeyuan Allen-Zhu
Yuanzhi Li
FedML
60
355
0
17 Dec 2020
Learning Graph Neural Networks with Approximate Gradient Descent
Qunwei Li
Shaofeng Zou
Leon Wenliang Zhong
GNN
32
1
0
07 Dec 2020
Align, then memorise: the dynamics of learning with feedback alignment
Maria Refinetti
Stéphane d'Ascoli
Ruben Ohana
Sebastian Goldt
26
36
0
24 Nov 2020
Computational Separation Between Convolutional and Fully-Connected Networks
Eran Malach
Shai Shalev-Shwartz
24
26
0
03 Oct 2020
Learning Deep ReLU Networks Is Fixed-Parameter Tractable
Sitan Chen
Adam R. Klivans
Raghu Meka
22
36
0
28 Sep 2020
Generalized Leverage Score Sampling for Neural Networks
J. Lee
Ruoqi Shen
Zhao Song
Mengdi Wang
Zheng Yu
21
42
0
21 Sep 2020
Nonparametric Learning of Two-Layer ReLU Residual Units
Zhunxuan Wang
Linyun He
Chunchuan Lyu
Shay B. Cohen
MLT
OffRL
33
1
0
17 Aug 2020
From Boltzmann Machines to Neural Networks and Back Again
Surbhi Goel
Adam R. Klivans
Frederic Koehler
19
5
0
25 Jul 2020
Probably Approximately Correct Constrained Learning
Luiz F. O. Chamon
Alejandro Ribeiro
22
37
0
09 Jun 2020
Feature Purification: How Adversarial Training Performs Robust Deep Learning
Zeyuan Allen-Zhu
Yuanzhi Li
MLT
AAML
37
147
0
20 May 2020
Symmetry & critical points for a model shallow neural network
Yossi Arjevani
M. Field
34
13
0
23 Mar 2020
An Optimization and Generalization Analysis for Max-Pooling Networks
Alon Brutzkus
Amir Globerson
MLT
AI4CE
16
4
0
22 Feb 2020
Convergence of End-to-End Training in Deep Unsupervised Contrastive Learning
Zixin Wen
SSL
21
2
0
17 Feb 2020
Revisiting Landscape Analysis in Deep Neural Networks: Eliminating Decreasing Paths to Infinity
Shiyu Liang
Ruoyu Sun
R. Srikant
35
19
0
31 Dec 2019
Optimization for deep learning: theory and algorithms
Ruoyu Sun
ODL
25
168
0
19 Dec 2019
Denoising and Regularization via Exploiting the Structural Bias of Convolutional Generators
Reinhard Heckel
Mahdi Soltanolkotabi
DiffM
35
81
0
31 Oct 2019
The Local Elasticity of Neural Networks
Hangfeng He
Weijie J. Su
40
44
0
15 Oct 2019
Theoretical Issues in Deep Networks: Approximation, Optimization and Generalization
T. Poggio
Andrzej Banburski
Q. Liao
ODL
31
161
0
25 Aug 2019
What Can ResNet Learn Efficiently, Going Beyond Kernels?
Zeyuan Allen-Zhu
Yuanzhi Li
24
183
0
24 May 2019
Gradient Descent can Learn Less Over-parameterized Two-layer Neural Networks on Classification Problems
Atsushi Nitanda
Geoffrey Chinot
Taiji Suzuki
MLT
16
33
0
23 May 2019
Fine-grained Optimization of Deep Neural Networks
Mete Ozay
ODL
14
1
0
22 May 2019
Every Local Minimum Value is the Global Minimum Value of Induced Model in Non-convex Machine Learning
Kenji Kawaguchi
Jiaoyang Huang
L. Kaelbling
AAML
24
18
0
07 Apr 2019
Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks
Sanjeev Arora
S. Du
Wei Hu
Zhiyuan Li
Ruosong Wang
MLT
55
961
0
24 Jan 2019