Natasha 2: Faster Non-Convex Optimization Than SGD
Zeyuan Allen-Zhu
arXiv:1708.08694, 29 August 2017
Community: ODL
Papers citing "Natasha 2: Faster Non-Convex Optimization Than SGD" (11 of 11 shown)
| Title | Authors | Community | Metrics | Date |
|---|---|---|---|---|
| How To Make the Gradients Small Stochastically: Even Faster Convex and Nonconvex SGD | Zeyuan Allen-Zhu | ODL | 68 / 170 / 0 | 08 Jan 2018 |
| Follow the Compressed Leader: Faster Online Learning of Eigenvectors and Faster MMWU | Zeyuan Allen-Zhu, Yuanzhi Li | | 48 / 44 / 0 | 06 Jan 2017 |
| First Efficient Convergence for Streaming k-PCA: a Global, Gap-Free, and Near-Optimal Rate | Zeyuan Allen-Zhu, Yuanzhi Li | | 87 / 100 / 0 | 26 Jul 2016 |
| Improved SVRG for Non-Strongly-Convex or Sum-of-Non-Convex Objectives | Zeyuan Allen-Zhu, Yang Yuan | | 71 / 197 / 0 | 05 Jun 2015 |
| On Graduated Optimization for Stochastic Non-Convex Problems | Elad Hazan, Kfir Y. Levy, Shai Shalev-Shwartz | | 69 / 116 / 0 | 12 Mar 2015 |
| Simple, Efficient, and Neural Algorithms for Sparse Coding | Sanjeev Arora, Rong Ge, Tengyu Ma, Ankur Moitra | | 94 / 193 / 0 | 02 Mar 2015 |
| Qualitatively characterizing neural network optimization problems | Ian Goodfellow, Oriol Vinyals, Andrew M. Saxe | ODL | 108 / 522 / 0 | 19 Dec 2014 |
| SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives | Aaron Defazio, Francis R. Bach, Simon Lacoste-Julien | ODL | 131 / 1,823 / 0 | 01 Jul 2014 |
| A Proximal Stochastic Gradient Method with Progressive Variance Reduction | Lin Xiao, Tong Zhang | ODL | 150 / 738 / 0 | 19 Mar 2014 |
| Minimizing Finite Sums with the Stochastic Average Gradient | Mark Schmidt, Nicolas Le Roux, Francis R. Bach | | 314 / 1,245 / 0 | 10 Sep 2013 |
| ADADELTA: An Adaptive Learning Rate Method | Matthew D. Zeiler | ODL | 132 / 6,626 / 0 | 22 Dec 2012 |