ZeroSARAH: Efficient Nonconvex Finite-Sum Optimization with Zero Full Gradient Computation

2 March 2021
Zhize Li, Slavomír Hanzely, Peter Richtárik
ArXiv · PDF · HTML

Papers citing "ZeroSARAH: Efficient Nonconvex Finite-Sum Optimization with Zero Full Gradient Computation"

13 papers shown

A Short Note of PAGE: Optimal Convergence Rates for Nonconvex Optimization
Zhize Li
17 Jun 2021

PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization
Zhize Li, Hongyan Bao, Xiangliang Zhang, Peter Richtárik
ODL
25 Aug 2020

A Unified Analysis of Stochastic Gradient Methods for Nonconvex Federated Optimization
Zhize Li, Peter Richtárik
FedML
12 Jun 2020

Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization
Samuel Horváth, Lihua Lei, Peter Richtárik, Michael I. Jordan
13 Feb 2020

Better Theory for SGD in the Nonconvex World
Ahmed Khaled, Peter Richtárik
09 Feb 2020

Improving the Sample and Communication Complexity for Decentralized Non-Convex Optimization: A Joint Gradient Estimation and Tracking Approach
Haoran Sun, Songtao Lu, Mingyi Hong
13 Oct 2019

Convergence of Distributed Stochastic Variance Reduced Methods without Sampling Extra Data
Shicong Cen, Huishuai Zhang, Yuejie Chi, Wei-neng Chen, Tie-Yan Liu
FedML
29 May 2019

Hybrid Stochastic Gradient Descent Algorithms for Stochastic Nonconvex Optimization
Quoc Tran-Dinh, Nhan H. Pham, Dzung Phan, Lam M. Nguyen
15 May 2019

Stabilized SVRG: Simple Variance Reduction for Nonconvex Optimization
Rong Ge, Zhize Li, Weiyao Wang, Xiang Wang
01 May 2019

SPIDER: Near-Optimal Non-Convex Optimization via Stochastic Path Integrated Differential Estimator
Cong Fang, C. J. Li, Zhouchen Lin, Tong Zhang
04 Jul 2018

A Simple Proximal Stochastic Gradient Method for Nonsmooth Nonconvex Optimization
Zhize Li, Jian Li
13 Feb 2018

Non-convex Optimization for Machine Learning
Prateek Jain, Purushottam Kar
21 Dec 2017

Stochastic First- and Zeroth-order Methods for Nonconvex Stochastic Programming
Saeed Ghadimi, Guanghui Lan
ODL
22 Sep 2013