arXiv: 1810.10690
SpiderBoost and Momentum: Faster Stochastic Variance Reduction Algorithms
25 October 2018
Zhe Wang, Kaiyi Ji, Yi Zhou, Yingbin Liang, Vahid Tarokh
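The paper's title refers to SPIDER-type variance-reduced gradient estimation with a constant step size. Below is a minimal sketch of that estimator on a toy least-squares problem; the objective, the function name spider_boost, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a SPIDER-type variance-reduced update (constant step size,
# as in SpiderBoost). The least-squares objective and all parameter values are
# illustrative assumptions, not the code released with the paper.
import numpy as np

def spider_boost(A, b, x0, step=0.05, epoch_len=50, batch=8, iters=500, seed=0):
    """Variance-reduced stochastic gradient descent on f(x) = (1/2n)||Ax - b||^2."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    x_prev = x0.copy()
    x = x0.copy()
    v = np.zeros_like(x0)
    for t in range(iters):
        if t % epoch_len == 0:
            # Periodic full-gradient refresh.
            v = A.T @ (A @ x - b) / n
        else:
            # Recursive estimator: correct the previous estimate with a
            # minibatch gradient difference at the current and previous iterates.
            idx = rng.integers(0, n, size=batch)
            Ai, bi = A[idx], b[idx]
            g_cur = Ai.T @ (Ai @ x - bi) / batch
            g_prev = Ai.T @ (Ai @ x_prev - bi) / batch
            v = g_cur - g_prev + v
        x_prev = x
        x = x - step * v  # constant step size throughout
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 10))
    b = A @ rng.standard_normal(10)
    x_hat = spider_boost(A, b, np.zeros(10))
    print("residual norm:", np.linalg.norm(A @ x_hat - b))
```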
Papers citing "SpiderBoost and Momentum: Faster Stochastic Variance Reduction Algorithms" (18 of 18 papers shown)
Zeroth-Order Alternating Gradient Descent Ascent Algorithms for a Class of Nonconvex-Nonconcave Minimax Problems
Zi Xu, Ziqi Wang, Junlin Wang, Y. Dai (24 Nov 2022)

SYNTHESIS: A Semi-Asynchronous Path-Integrated Stochastic Gradient Method for Distributed Learning in Computing Clusters
Zhuqing Liu, Xin Zhang, Jia-Wei Liu (17 Aug 2022)

Multi-block-Single-probe Variance Reduced Estimator for Coupled Compositional Optimization
Wei Jiang, Gang Li, Yibo Wang, Lijun Zhang, Tianbao Yang (18 Jul 2022)

Optimal Algorithms for Stochastic Multi-Level Compositional Optimization
Wei Jiang, Bokun Wang, Yibo Wang, Lijun Zhang, Tianbao Yang (15 Feb 2022)

Toward Efficient Online Scheduling for Distributed Machine Learning Systems
Menglu Yu, Jia Liu, Chuan Wu, Bo Ji, Elizabeth S. Bentley (06 Aug 2021)

Provably Faster Algorithms for Bilevel Optimization
Junjie Yang, Kaiyi Ji, Yingbin Liang (08 Jun 2021)

ANITA: An Optimal Loopless Accelerated Variance-Reduced Gradient Method
Zhize Li (21 Mar 2021)

PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization
Zhize Li, Hongyan Bao, Xiangliang Zhang, Peter Richtárik (25 Aug 2020)

Convergence of Meta-Learning with Task-Specific Adaptation over Partial Parameters
Kaiyi Ji, J. Lee, Yingbin Liang, H. Vincent Poor (16 Jun 2020)

Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization
Samuel Horváth, Lihua Lei, Peter Richtárik, Michael I. Jordan (13 Feb 2020)

History-Gradient Aided Batch Size Adaptation for Variance Reduced Algorithms
Kaiyi Ji, Zhe Wang, Bowen Weng, Yi Zhou, Wei Zhang, Yingbin Liang (21 Oct 2019)

Sample Efficient Policy Gradient Methods with Recursive Variance Reduction
Pan Xu, F. Gao, Quanquan Gu (18 Sep 2019)

Stochastic First-order Methods for Convex and Nonconvex Functional Constrained Optimization
Digvijay Boob, Qi Deng, Guanghui Lan (07 Aug 2019)

ProxSARAH: An Efficient Algorithmic Framework for Stochastic Composite Nonconvex Optimization
Nhan H. Pham, Lam M. Nguyen, Dzung Phan, Quoc Tran-Dinh (15 Feb 2019)

SGD Converges to Global Minimum in Deep Learning via Star-convex Path
Yi Zhou, Junjie Yang, Huishuai Zhang, Yingbin Liang, Vahid Tarokh (02 Jan 2019)

R-SPIDER: A Fast Riemannian Stochastic Optimization Algorithm with Curvature Independent Rate
Junzhe Zhang, Hongyi Zhang, S. Sra (10 Nov 2018)

Stochastic Variance-Reduced Cubic Regularization for Nonconvex Optimization
Zhe Wang, Yi Zhou, Yingbin Liang, Guanghui Lan (20 Feb 2018)

Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition
Hamed Karimi, J. Nutini, Mark W. Schmidt (16 Aug 2016)