ResearchTrend.AI
SpiderBoost and Momentum: Faster Stochastic Variance Reduction Algorithms

25 October 2018
Zhe Wang, Kaiyi Ji, Yi Zhou, Yingbin Liang, Vahid Tarokh

Papers citing "SpiderBoost and Momentum: Faster Stochastic Variance Reduction Algorithms" (18 papers shown)

  • Zeroth-Order Alternating Gradient Descent Ascent Algorithms for a Class of Nonconvex-Nonconcave Minimax Problems
    Zi Xu, Ziqi Wang, Junlin Wang, Y. Dai (24 Nov 2022)
  • SYNTHESIS: A Semi-Asynchronous Path-Integrated Stochastic Gradient Method for Distributed Learning in Computing Clusters
    Zhuqing Liu, Xin Zhang, Jia-Wei Liu (17 Aug 2022)
  • Multi-block-Single-probe Variance Reduced Estimator for Coupled Compositional Optimization
    Wei Jiang, Gang Li, Yibo Wang, Lijun Zhang, Tianbao Yang (18 Jul 2022)
  • Optimal Algorithms for Stochastic Multi-Level Compositional Optimization
    Wei Jiang, Bokun Wang, Yibo Wang, Lijun Zhang, Tianbao Yang (15 Feb 2022)
  • Toward Efficient Online Scheduling for Distributed Machine Learning Systems
    Menglu Yu, Jia Liu, Chuan Wu, Bo Ji, Elizabeth S. Bentley (06 Aug 2021)
  • Provably Faster Algorithms for Bilevel Optimization
    Junjie Yang, Kaiyi Ji, Yingbin Liang (08 Jun 2021)
  • ANITA: An Optimal Loopless Accelerated Variance-Reduced Gradient Method
    Zhize Li (21 Mar 2021)
  • PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization
    Zhize Li, Hongyan Bao, Xiangliang Zhang, Peter Richtárik (25 Aug 2020)
  • Convergence of Meta-Learning with Task-Specific Adaptation over Partial Parameters
    Kaiyi Ji, J. Lee, Yingbin Liang, H. Vincent Poor (16 Jun 2020)
  • Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization
    Samuel Horváth, Lihua Lei, Peter Richtárik, Michael I. Jordan (13 Feb 2020)
  • History-Gradient Aided Batch Size Adaptation for Variance Reduced Algorithms
    Kaiyi Ji, Zhe Wang, Bowen Weng, Yi Zhou, Wei Zhang, Yingbin Liang (21 Oct 2019)
  • Sample Efficient Policy Gradient Methods with Recursive Variance Reduction
    Pan Xu, F. Gao, Quanquan Gu (18 Sep 2019)
  • Stochastic First-order Methods for Convex and Nonconvex Functional Constrained Optimization
    Digvijay Boob, Qi Deng, Guanghui Lan (07 Aug 2019)
  • ProxSARAH: An Efficient Algorithmic Framework for Stochastic Composite Nonconvex Optimization
    Nhan H. Pham, Lam M. Nguyen, Dzung Phan, Quoc Tran-Dinh (15 Feb 2019)
  • SGD Converges to Global Minimum in Deep Learning via Star-convex Path
    Yi Zhou, Junjie Yang, Huishuai Zhang, Yingbin Liang, Vahid Tarokh (02 Jan 2019)
  • R-SPIDER: A Fast Riemannian Stochastic Optimization Algorithm with Curvature Independent Rate
    Junzhe Zhang, Hongyi Zhang, S. Sra (10 Nov 2018)
  • Stochastic Variance-Reduced Cubic Regularization for Nonconvex Optimization
    Zhe Wang, Yi Zhou, Yingbin Liang, Guanghui Lan (20 Feb 2018)
  • Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition
    Hamed Karimi, J. Nutini, Mark W. Schmidt (16 Aug 2016)