
Finding Approximate Local Minima Faster than Gradient Descent (arXiv: 1611.01146)

3 November 2016
Naman Agarwal, Zeyuan Allen-Zhu, Brian Bullins, Elad Hazan, Tengyu Ma

Papers citing "Finding Approximate Local Minima Faster than Gradient Descent"

22 papers shown
Restarted Nonconvex Accelerated Gradient Descent: No More Polylogarithmic Factor in the $O(ε^{-7/4})$ Complexity
Huan Li, Zhouchen Lin
27 Jan 2022
Large-Scale Deep Learning Optimizations: A Comprehensive Survey
Xiaoxin He, Fuzhao Xue, Xiaozhe Ren, Yang You
01 Nov 2021
ADAHESSIAN: An Adaptive Second Order Optimizer for Machine Learning
Z. Yao, A. Gholami, Sheng Shen, Mustafa Mustafa, Kurt Keutzer, Michael W. Mahoney
ODL
01 Jun 2020
Quantum algorithm for finding the negative curvature direction in non-convex optimization
Kaining Zhang, Min-hsiu Hsieh, Liu Liu, Dacheng Tao
17 Sep 2019
Stabilized SVRG: Simple Variance Reduction for Nonconvex Optimization
Rong Ge, Zhize Li, Weiyao Wang, Xiang Wang
01 May 2019
Stochastic Nested Variance Reduction for Nonconvex Optimization
Dongruo Zhou, Pan Xu, Quanquan Gu
20 Jun 2018
Defending Against Saddle Point Attack in Byzantine-Robust Distributed Learning
Dong Yin, Yudong Chen, Kannan Ramchandran, Peter L. Bartlett
FedML
14 Jun 2018
Towards Riemannian Accelerated Gradient Methods
Hongyi Zhang, S. Sra
07 Jun 2018
Provably convergent acceleration in factored gradient descent with applications in matrix sensing
Tayo Ajayi, David Mildebrath, Anastasios Kyrillidis, Shashanka Ubaru, Georgios Kollias, K. Bouchard
01 Jun 2018
Katyusha X: Practical Momentum Method for Stochastic Sum-of-Nonconvex Optimization
Zeyuan Allen-Zhu
ODL
12 Feb 2018
Neon2: Finding Local Minima via First-Order Oracles
Zeyuan Allen-Zhu, Yuanzhi Li
17 Nov 2017
Natasha 2: Faster Non-Convex Optimization Than SGD
Zeyuan Allen-Zhu
ODL
29 Aug 2017
Efficient Regret Minimization in Non-Convex Games
Elad Hazan, Karan Singh, Cyril Zhang
31 Jul 2017
Global Convergence of Langevin Dynamics Based Algorithms for Nonconvex Optimization
Pan Xu, Jinghui Chen, Difan Zou, Quanquan Gu
20 Jul 2017
Theoretical insights into the optimization landscape of over-parameterized shallow neural networks
Mahdi Soltanolkotabi, Adel Javanmard, J. Lee
16 Jul 2017
On the Gap Between Strict-Saddles and True Convexity: An $Ω(\log d)$ Lower Bound for Eigenvector Approximation
Max Simchowitz, A. Alaoui, Benjamin Recht
14 Apr 2017
No Spurious Local Minima in Nonconvex Low Rank Problems: A Unified Geometric Analysis
Rong Ge, Chi Jin, Yi Zheng
03 Apr 2017
How to Escape Saddle Points Efficiently
Chi Jin, Rong Ge, Praneeth Netrapalli, Sham Kakade, Michael I. Jordan
ODL
02 Mar 2017
Natasha: Faster Non-Convex Stochastic Optimization Via Strongly Non-Convex Parameter
Zeyuan Allen-Zhu
02 Feb 2017
The Power of Normalization: Faster Evasion of Saddle Points
Kfir Y. Levy
15 Nov 2016
Katyusha: The First Direct Acceleration of Stochastic Gradient Methods
Zeyuan Allen-Zhu
ODL
18 Mar 2016
The Loss Surfaces of Multilayer Networks
A. Choromańska, Mikael Henaff, Michaël Mathieu, Gerard Ben Arous, Yann LeCun
ODL
30 Nov 2014