A unified variance-reduced accelerated gradient method for convex optimization
Guanghui Lan, Zhize Li, Yi Zhou
arXiv:1905.12412, 29 May 2019
Papers citing "A unified variance-reduced accelerated gradient method for convex optimization" (16 of 16 papers shown):
1. Correlated Quantization for Faster Nonconvex Distributed Optimization. Andrei Panferov, Yury Demidovich, Ahmad Rammal, Peter Richtárik. 10 Jan 2024.
2. Breaking the Lower Bound with (Little) Structure: Acceleration in Non-Convex Stochastic Optimization with Heavy-Tailed Noise. Zijian Liu, Jiawei Zhang, Zhengyuan Zhou. 14 Feb 2023.
3. Near-Optimal Non-Convex Stochastic Optimization under Generalized Smoothness. Zijian Liu, Srikanth Jagabathula, Zhengyuan Zhou. 13 Feb 2023.
4. Adaptive Stochastic Variance Reduction for Non-convex Finite-Sum Minimization. Ali Kavis, Stratis Skoulakis, Kimon Antonakopoulos, L. Dadi, V. Cevher. 03 Nov 2022.
5. RECAPP: Crafting a More Efficient Catalyst for Convex Optimization. Y. Carmon, A. Jambulapati, Yujia Jin, Aaron Sidford. 17 Jun 2022.
6. Distributionally Robust Optimization via Ball Oracle Acceleration. Y. Carmon, Danielle Hausler. 24 Mar 2022.
7. Accelerating Perturbed Stochastic Iterates in Asynchronous Lock-Free Optimization. Kaiwen Zhou, Anthony Man-Cho So, James Cheng. 30 Sep 2021.
8. An Accelerated Variance-Reduced Conditional Gradient Sliding Algorithm for First-order and Zeroth-order Optimization. Xiyuan Wei, Bin Gu, Heng-Chiao Huang. 18 Sep 2021.
9. ANITA: An Optimal Loopless Accelerated Variance-Reduced Gradient Method. Zhize Li. 21 Mar 2021.
10. SVRG Meets AdaGrad: Painless Variance Reduction. Benjamin Dubois-Taine, Sharan Vaswani, Reza Babanezhad, Mark Schmidt, Simon Lacoste-Julien. 18 Feb 2021.
11. PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization. Zhize Li, Hongyan Bao, Xiangliang Zhang, Peter Richtárik. 25 Aug 2020.
12. Variance Reduction via Accelerated Dual Averaging for Finite-Sum Optimization. Chaobing Song, Yong Jiang, Yi Ma. 18 Jun 2020.
13. Acceleration for Compressed Gradient Descent in Distributed and Federated Optimization. Zhize Li, D. Kovalev, Xun Qian, Peter Richtárik. 26 Feb 2020.
14. Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization. Samuel Horváth, Lihua Lei, Peter Richtárik, Michael I. Jordan. 13 Feb 2020.
15. On the Adaptivity of Stochastic Gradient-Based Optimization. Lihua Lei, Michael I. Jordan. 09 Apr 2019.
16. A Proximal Stochastic Gradient Method with Progressive Variance Reduction. Lin Xiao, Tong Zhang. 19 Mar 2014.