Neon2: Finding Local Minima via First-Order Oracles
Zeyuan Allen-Zhu · Yuanzhi Li
17 November 2017 · arXiv:1711.06673

Papers citing "Neon2: Finding Local Minima via First-Order Oracles"

19 papers shown:

  1. Katyusha X: Practical Momentum Method for Stochastic Sum-of-Nonconvex Optimization · Zeyuan Allen-Zhu · 12 Feb 2018
  2. First-order Stochastic Algorithms for Escaping From Saddle Points in Almost Linear Time · Yi Tian Xu, Rong Jin, Tianbao Yang · 03 Nov 2017
  3. A Generic Approach for Escaping Saddle points · Sashank J. Reddi, Manzil Zaheer, S. Sra, Barnabás Póczós, Francis R. Bach, Ruslan Salakhutdinov, Alex Smola · 05 Sep 2017
  4. Natasha 2: Faster Non-Convex Optimization Than SGD · Zeyuan Allen-Zhu · 29 Aug 2017
  5. How to Escape Saddle Points Efficiently · Chi Jin, Rong Ge, Praneeth Netrapalli, Sham Kakade, Michael I. Jordan · 02 Mar 2017
  6. Follow the Compressed Leader: Faster Online Learning of Eigenvectors and Faster MMWU · Zeyuan Allen-Zhu, Yuanzhi Li · 06 Jan 2017
  7. Finding Approximate Local Minima Faster than Gradient Descent · Naman Agarwal, Zeyuan Allen-Zhu, Brian Bullins, Elad Hazan, Tengyu Ma · 03 Nov 2016
  8. Faster Principal Component Regression and Stable Matrix Chebyshev Approximation · Zeyuan Allen-Zhu, Yuanzhi Li · 16 Aug 2016
  9. LazySVD: Even Faster SVD Decomposition Yet Without Agonizing Pain · Zeyuan Allen-Zhu, Yuanzhi Li · 12 Jul 2016
  10. Stochastic Variance Reduction for Nonconvex Optimization · Sashank J. Reddi, Ahmed S. Hefny, S. Sra, Barnabás Póczós, Alex Smola · 19 Mar 2016
  11. Variance Reduction for Faster Non-Convex Optimization · Zeyuan Allen-Zhu, Elad Hazan · 17 Mar 2016
  12. SDCA without Duality, Regularization, and Individual Convexity · Shai Shalev-Shwartz · 04 Feb 2016
  13. Robust Shift-and-Invert Preconditioning: Faster and More Sample Efficient Algorithms for Eigenvector Computation · Chi Jin, Sham Kakade, Cameron Musco, Praneeth Netrapalli, Aaron Sidford · 29 Oct 2015
  14. Escaping From Saddle Points --- Online Stochastic Gradient for Tensor Decomposition · Rong Ge, Furong Huang, Chi Jin, Yang Yuan · 06 Mar 2015
  15. Qualitatively characterizing neural network optimization problems · Ian Goodfellow, Oriol Vinyals, Andrew M. Saxe · 19 Dec 2014
  16. The Loss Surfaces of Multilayer Networks · A. Choromańska, Mikael Henaff, Michaël Mathieu, Gerard Ben Arous, Yann LeCun · 30 Nov 2014
  17. SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives · Aaron Defazio, Francis R. Bach, Simon Lacoste-Julien · 01 Jul 2014
  18. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization · Yann N. Dauphin, Razvan Pascanu, Çağlar Gülçehre, Kyunghyun Cho, Surya Ganguli, Yoshua Bengio · 10 Jun 2014
  19. Minimizing Finite Sums with the Stochastic Average Gradient · Mark Schmidt, Nicolas Le Roux, Francis R. Bach · 10 Sep 2013