Recovery Guarantees for One-hidden-layer Neural Networks (arXiv:1706.03175)

10 June 2017
Kai Zhong, Zhao Song, Prateek Jain, Peter L. Bartlett, Inderjit S. Dhillon
Topics: MLT

Papers citing "Recovery Guarantees for One-hidden-layer Neural Networks"

50 of 223 citing papers shown (title · authors · topic tags · date):
• On the Benefit of Width for Neural Networks: Disappearance of Bad Basins
  Dawei Li, Tian Ding, Ruoyu Sun · 28 Dec 2018

• Towards a Theoretical Understanding of Hashing-Based Neural Nets
  Yibo Lin, Zhao Song, Lin F. Yang · 26 Dec 2018

• Overparameterized Nonlinear Learning: Gradient Descent Takes the Shortest Path?
  Samet Oymak, Mahdi Soltanolkotabi · ODL · 25 Dec 2018

• Learning and Generalization in Overparameterized Neural Networks, Going Beyond Two Layers
  Zeyuan Allen-Zhu, Yuanzhi Li, Yingyu Liang · MLT · 12 Nov 2018

• A Convergence Theory for Deep Learning via Over-Parameterization
  Zeyuan Allen-Zhu, Yuanzhi Li, Zhao Song · AI4CE, ODL · 09 Nov 2018

• Gradient Descent Finds Global Minima of Deep Neural Networks
  S. Du, J. Lee, Haochuan Li, Liwei Wang, Masayoshi Tomizuka · ODL · 09 Nov 2018

• Learning Two Layer Rectified Neural Networks in Polynomial Time
  Ainesh Bakshi, Rajesh Jayaram, David P. Woodruff · NoLa · 05 Nov 2018

• Towards a Zero-One Law for Column Subset Selection
  Zhao Song, David P. Woodruff, Peilin Zhong · 04 Nov 2018

• On the Convergence Rate of Training Recurrent Neural Networks
  Zeyuan Allen-Zhu, Yuanzhi Li, Zhao Song · 29 Oct 2018

• Subgradient Descent Learns Orthogonal Dictionaries
  Yu Bai, Qijia Jiang, Ju Sun · 25 Oct 2018

• Depth with Nonlinearity Creates No Bad Local Minima in ResNets
  Kenji Kawaguchi, Yoshua Bengio · ODL · 21 Oct 2018

• Small ReLU networks are powerful memorizers: a tight analysis of memorization capacity
  Chulhee Yun, S. Sra, Ali Jadbabaie · 17 Oct 2018

• Learning Two-layer Neural Networks with Symmetric Inputs
  Rong Ge, Rohith Kuditipudi, Zhize Li, Xiang Wang · OOD, MLT · 16 Oct 2018

• Learning One-hidden-layer Neural Networks under General Input Distributions
  Weihao Gao, Ashok Vardhan Makkuva, Sewoong Oh, Pramod Viswanath · MLT · 09 Oct 2018

• A Convergence Analysis of Gradient Descent for Deep Linear Neural Networks
  Sanjeev Arora, Nadav Cohen, Noah Golowich, Wei Hu · 04 Oct 2018

• Efficiently testing local optimality and escaping saddles for ReLU networks
  Chulhee Yun, S. Sra, Ali Jadbabaie · 28 Sep 2018

• On the loss landscape of a class of deep neural networks with no bad local valleys
  Quynh N. Nguyen, Mahesh Chandra Mukkamala, Matthias Hein · 27 Sep 2018

• Nonconvex Optimization Meets Low-Rank Matrix Factorization: An Overview
  Yuejie Chi, Yue M. Lu, Yuxin Chen · 25 Sep 2018

• Stochastic Gradient Descent Learns State Equations with Nonlinear Activations
  Samet Oymak · 09 Sep 2018

• Two Dimensional Stochastic Configuration Networks for Image Data Analytics
  Ming Li, Dianhui Wang · 06 Sep 2018

• On the Decision Boundary of Deep Neural Networks
  Yu Li, Lizhong Ding, Xin Gao · UQCV · 16 Aug 2018

• Learning ReLU Networks on Linearly Separable Data: Algorithm, Optimality, and Generalization
  G. Wang, G. Giannakis, Jie Chen · MLT · 14 Aug 2018

• Learning Overparameterized Neural Networks via Stochastic Gradient Descent on Structured Data
  Yuanzhi Li, Yingyu Liang · MLT · 03 Aug 2018

• Tensor Methods for Additive Index Models under Discordance and Heterogeneity
  Krishnakumar Balasubramanian, Jianqing Fan, Zhuoran Yang · 17 Jul 2018

• Model Reconstruction from Model Explanations
  S. Milli, Ludwig Schmidt, Anca Dragan, Moritz Hardt · FAtt · 13 Jul 2018

• Learning ReLU Networks via Alternating Minimization
  Gauri Jagatap, C. Hegde · 20 Jun 2018

• Learning One-hidden-layer ReLU Networks via Gradient Descent
  Xiao Zhang, Yaodong Yu, Lingxiao Wang, Quanquan Gu · MLT · 20 Jun 2018

• Algorithmic Regularization in Learning Deep Homogeneous Models: Layers are Automatically Balanced
  S. Du, Wei Hu, J. Lee · MLT · 04 Jun 2018

• Autoencoders Learn Generative Linear Models
  Thanh Van Nguyen, Raymond K. W. Wong, C. Hegde · DRL · 02 Jun 2018

• Nonlinear Inductive Matrix Completion based on One-layer Neural Networks
  Kai Zhong, Zhao Song, Prateek Jain, Inderjit S. Dhillon · 26 May 2018

• A Unified Framework for Training Neural Networks
  H. Ghauch, H. S. Ghadikolaei, Carlo Fischione, Mikael Skoglund · AI4CE · 23 May 2018

• Adding One Neuron Can Eliminate All Bad Local Minima
  Shiyu Liang, Ruoyu Sun, J. Lee, R. Srikant · 22 May 2018

• How Many Samples are Needed to Estimate a Convolutional or Recurrent Neural Network?
  S. Du, Yining Wang, Xiyu Zhai, Sivaraman Balakrishnan, Ruslan Salakhutdinov, Aarti Singh · SSL · 21 May 2018

• Improved Learning of One-hidden-layer Convolutional Neural Networks with Overlaps
  S. Du, Surbhi Goel · MLT · 20 May 2018

• End-to-end Learning of a Convolutional Neural Network via Deep Tensor Decomposition
  Samet Oymak, Mahdi Soltanolkotabi · 16 May 2018

• Gradient Descent for One-Hidden-Layer Neural Networks: Polynomial Convergence and SQ Lower Bounds
  Santosh Vempala, John Wilmes · MLT · 07 May 2018

• A Mean Field View of the Landscape of Two-Layers Neural Networks
  Song Mei, Andrea Montanari, Phan-Minh Nguyen · MLT · 18 Apr 2018

• A Provably Correct Algorithm for Deep Learning that Actually Works
  Eran Malach, Shai Shalev-Shwartz · MLT · 26 Mar 2018

• On the Power of Over-parametrization in Neural Networks with Quadratic Activation
  S. Du, J. Lee · 03 Mar 2018

• Breaking the gridlock in Mixture-of-Experts: Consistent and Efficient Algorithms
  Ashok Vardhan Makkuva, Sewoong Oh, Sreeram Kannan, Pramod Viswanath · MoE · 21 Feb 2018

• On the Connection Between Learning Two-Layers Neural Networks and Tensor Decomposition
  Marco Mondelli, Andrea Montanari · MLT, CML · 20 Feb 2018

• Understanding the Loss Surface of Neural Networks for Binary Classification
  Shiyu Liang, Ruoyu Sun, Yixuan Li, R. Srikant · 19 Feb 2018

• Guaranteed Recovery of One-Hidden-Layer Neural Networks via Cross Entropy
  H. Fu, Yuejie Chi, Yingbin Liang · FedML · 18 Feb 2018

• Nonconvex Matrix Factorization from Rank-One Measurements
  Yuanxin Li, Cong Ma, Yuxin Chen, Yuejie Chi · 17 Feb 2018

• Gradient descent with identity initialization efficiently learns positive definite linear transformations by deep residual networks
  Peter L. Bartlett, D. Helmbold, Philip M. Long · 16 Feb 2018

• A Simple Proximal Stochastic Gradient Method for Nonsmooth Nonconvex Optimization
  Zhize Li, Jian Li · 13 Feb 2018

• Small nonlinearities in activation functions create bad local minima in neural networks
  Chulhee Yun, S. Sra, Ali Jadbabaie · ODL · 10 Feb 2018

• Learning One Convolutional Layer with Overlapping Patches
  Surbhi Goel, Adam R. Klivans, Raghu Meka · MLT · 07 Feb 2018

• Learning Compact Neural Networks with Regularization
  Samet Oymak · MLT · 05 Feb 2018

• The Multilinear Structure of ReLU Networks
  T. Laurent, J. V. Brecht · 29 Dec 2017