ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.
First-order Stochastic Algorithms for Escaping From Saddle Points in Almost Linear Time

3 November 2017 · Yi Tian Xu · Rong Jin · Tianbao Yang · ODL · arXiv:1711.01944
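The paper's theme, escaping strict saddle points using only first-order (gradient) information plus stochasticity, can be illustrated with a toy sketch. Nothing below is from the paper itself: the objective f(x, y) = ½x² − ½y², the step size `eta`, the noise scale `sigma`, and the stationarity threshold are all invented for the example, which shows the generic perturbation idea rather than the authors' algorithm.

```python
import numpy as np

# Toy objective with a strict saddle at the origin:
#   f(x, y) = 0.5 * x**2 - 0.5 * y**2
# The x-direction has positive curvature, the y-direction negative curvature.
def grad(w):
    x, y = w
    return np.array([x, -y])

def perturbed_gd(w0, eta=0.1, sigma=0.01, steps=100, seed=0):
    """Gradient descent that injects isotropic noise near stationary points."""
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float)
    for _ in range(steps):
        g = grad(w)
        if np.linalg.norm(g) < 1e-3:
            # Near a stationary point plain GD stalls; a small random
            # perturbation lets the negative-curvature direction take over.
            w = w + sigma * rng.standard_normal(2)
        else:
            w = w - eta * g
    return w

# Initialized exactly at the saddle, plain GD would never move; the
# perturbed variant drifts along the y (negative-curvature) axis and
# the x coordinate contracts toward zero.
w = perturbed_gd([0.0, 0.0])
print(w)
```

The first iteration fires the perturbation (the gradient at the origin is exactly zero); every later step multiplies the y-coordinate by (1 + eta), so the iterate leaves the saddle region geometrically fast, which is the intuition behind the almost-linear-time escape guarantees studied in this literature.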
Papers citing "First-order Stochastic Algorithms for Escaping From Saddle Points in Almost Linear Time" (27 of 27 papers shown)

  • Comparisons Are All You Need for Optimizing Smooth Functions · Chenyi Zhang, Tongyang Li · 19 May 2024 · AAML
  • Private (Stochastic) Non-Convex Optimization Revisited: Second-Order Stationary Points and Excess Risks · Arun Ganesh, Daogao Liu, Sewoong Oh, Abhradeep Thakurta · 20 Feb 2023 · ODL
  • Variance-Reduced Conservative Policy Iteration · Naman Agarwal, Brian Bullins, Karan Singh · 12 Dec 2022
  • Escaping From Saddle Points Using Asynchronous Coordinate Gradient Descent · Marco Bornstein, Jin-Peng Liu, Jingling Li, Furong Huang · 17 Nov 2022
  • Tackling benign nonconvexity with smoothing and stochastic gradients · Harsh Vardhan, Sebastian U. Stich · 18 Feb 2022
  • Restarted Nonconvex Accelerated Gradient Descent: No More Polylogarithmic Factor in the O(ε^{-7/4}) Complexity · Huan Li, Zhouchen Lin · 27 Jan 2022
  • Escape saddle points by a simple gradient-descent based algorithm · Chenyi Zhang, Tongyang Li · 28 Nov 2021 · ODL
  • Faster Perturbed Stochastic Gradient Methods for Finding Local Minima · Zixiang Chen, Dongruo Zhou, Quanquan Gu · 25 Oct 2021
  • Escaping Saddle Points with Compressed SGD · Dmitrii Avdiukhin, G. Yaroslavtsev · 21 May 2021
  • Quickly Finding a Benign Region via Heavy Ball Momentum in Non-Convex Optimization · Jun-Kun Wang, Jacob D. Abernethy · 04 Oct 2020
  • Alternating Direction Method of Multipliers for Quantization · Tianjian Huang, Prajwal Singhania, Maziar Sanjabi, Pabitra Mitra, Meisam Razaviyayn · 08 Sep 2020 · MQ
  • Second-Order Information in Non-Convex Stochastic Optimization: Power and Limitations · Yossi Arjevani, Y. Carmon, John C. Duchi, Dylan J. Foster, Ayush Sekhari, Karthik Sridharan · 24 Jun 2020
  • Optimization for deep learning: theory and algorithms · Ruoyu Sun · 19 Dec 2019 · ODL
  • SNAP: Finding Approximate Second-Order Stationary Solutions Efficiently for Non-convex Linearly Constrained Problems · Songtao Lu, Meisam Razaviyayn, Bo Yang, Kejun Huang, Mingyi Hong · 09 Jul 2019
  • Combining Stochastic Adaptive Cubic Regularization with Negative Curvature for Nonconvex Optimization · Seonho Park, Seung Hyun Jung, P. Pardalos · 27 Jun 2019 · ODL
  • Stabilized SVRG: Simple Variance Reduction for Nonconvex Optimization · Rong Ge, Zhize Li, Weiyao Wang, Xiang Wang · 01 May 2019
  • Asymmetric Valleys: Beyond Sharp and Flat Local Minima · Haowei He, Gao Huang, Yang Yuan · 02 Feb 2019 · ODL, MLT
  • Escaping Saddle Points with Adaptive Gradient Methods · Matthew Staib, Sashank J. Reddi, Satyen Kale, Sanjiv Kumar, S. Sra · 26 Jan 2019 · ODL
  • SPIDER: Near-Optimal Non-Convex Optimization via Stochastic Path Integrated Differential Estimator · Cong Fang, C. J. Li, Zhouchen Lin, Tong Zhang · 04 Jul 2018
  • Stochastic Nested Variance Reduction for Nonconvex Optimization · Dongruo Zhou, Pan Xu, Quanquan Gu · 20 Jun 2018
  • Defending Against Saddle Point Attack in Byzantine-Robust Distributed Learning · Dong Yin, Yudong Chen, Kannan Ramchandran, Peter L. Bartlett · 14 Jun 2018 · FedML
  • AdaGrad stepsizes: Sharp convergence over nonconvex landscapes · Rachel A. Ward, Xiaoxia Wu, Léon Bottou · 05 Jun 2018 · ODL
  • Local Saddle Point Optimization: A Curvature Exploitation Approach · Leonard Adolphs, Hadi Daneshmand, Aurelien Lucchi, Thomas Hofmann · 15 May 2018
  • Escaping Saddles with Stochastic Gradients · Hadi Daneshmand, Jonas Köhler, Aurelien Lucchi, Thomas Hofmann · 15 Mar 2018
  • NEON+: Accelerated Gradient Methods for Extracting Negative Curvature for Non-Convex Optimization · Yi Tian Xu, Rong Jin, Tianbao Yang · 04 Dec 2017
  • Neon2: Finding Local Minima via First-Order Oracles · Zeyuan Allen-Zhu, Yuanzhi Li · 17 Nov 2017
  • Natasha 2: Faster Non-Convex Optimization Than SGD · Zeyuan Allen-Zhu · 29 Aug 2017 · ODL