SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient
Lam M. Nguyen, Jie Liu, K. Scheinberg, Martin Takáč
arXiv:1703.00102, 1 March 2017
Papers citing "SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient"
50 / 124 papers shown
DSAG: A mixed synchronous-asynchronous iterative method for straggler-resilient learning
A. Severinson, E. Rosnes, S. E. Rouayheb, Alexandre Graell i Amat (27 Nov 2021)

Random-reshuffled SARAH does not need a full gradient computations
Aleksandr Beznosikov, Martin Takáč (26 Nov 2021)

Distributed Policy Gradient with Variance Reduction in Multi-Agent Reinforcement Learning
Xiaoxiao Zhao, Jinlong Lei, Li Li, Jie-bin Chen [OffRL] (25 Nov 2021)

Federated Expectation Maximization with heterogeneity mitigation and variance reduction
Aymeric Dieuleveut, G. Fort, Eric Moulines, Geneviève Robin [FedML] (03 Nov 2021)

Faster Perturbed Stochastic Gradient Methods for Finding Local Minima
Zixiang Chen, Dongruo Zhou, Quanquan Gu (25 Oct 2021)

Nys-Newton: Nyström-Approximated Curvature for Stochastic Optimization
Dinesh Singh, Hardik Tankaria, M. Yamada [ODL] (16 Oct 2021)

Breaking the Sample Complexity Barrier to Regret-Optimal Model-Free Reinforcement Learning
Gen Li, Laixi Shi, Yuxin Chen, Yuejie Chi [OffRL] (09 Oct 2021)

Accelerating Perturbed Stochastic Iterates in Asynchronous Lock-Free Optimization
Kaiwen Zhou, Anthony Man-Cho So, James Cheng (30 Sep 2021)
ErrorCompensatedX: error compensation for variance reduced algorithms
Hanlin Tang, Yao Li, Ji Liu, Ming Yan (04 Aug 2021)

Enhanced Bilevel Optimization via Bregman Distance
Feihu Huang, Junyi Li, Shangqian Gao, Heng-Chiao Huang (26 Jul 2021)

A general sample complexity analysis of vanilla policy gradient
Rui Yuan, Robert Mansel Gower, A. Lazaric (23 Jul 2021)

Provably Faster Algorithms for Bilevel Optimization
Junjie Yang, Kaiyi Ji, Yingbin Liang (08 Jun 2021)

Randomized Stochastic Variance-Reduced Methods for Multi-Task Stochastic Bilevel Optimization
Zhishuai Guo, Quan Hu, Lijun Zhang, Tianbao Yang (05 May 2021)

GT-STORM: Taming Sample, Communication, and Memory Complexities in Decentralized Non-Convex Learning
Xin Zhang, Jia Liu, Zhengyuan Zhu, Elizabeth S. Bentley (04 May 2021)

Greedy-GQ with Variance Reduction: Finite-time Analysis and Improved Complexity
Shaocong Ma, Ziyi Chen, Yi Zhou, Shaofeng Zou (30 Mar 2021)

ANITA: An Optimal Loopless Accelerated Variance-Reduced Gradient Method
Zhize Li (21 Mar 2021)
SVRG Meets AdaGrad: Painless Variance Reduction
Benjamin Dubois-Taine, Sharan Vaswani, Reza Babanezhad, Mark W. Schmidt, Simon Lacoste-Julien (18 Feb 2021)

A Hybrid Variance-Reduced Method for Decentralized Stochastic Non-Convex Optimization
Ran Xin, U. Khan, S. Kar (12 Feb 2021)

PMGT-VR: A decentralized proximal-gradient algorithmic framework with variance reduction
Haishan Ye, Wei Xiong, Tong Zhang (30 Dec 2020)

Faster Non-Convex Federated Learning via Global and Local Momentum
Rudrajit Das, Anish Acharya, Abolfazl Hashemi, Sujay Sanghavi, Inderjit S. Dhillon, Ufuk Topcu [FedML] (07 Dec 2020)

A Stochastic Path-Integrated Differential EstimatoR Expectation Maximization Algorithm
G. Fort, Eric Moulines, Hoi-To Wai [TPM] (30 Nov 2020)

SMG: A Shuffling Gradient-Based Method with Momentum
Trang H. Tran, Lam M. Nguyen, Quoc Tran-Dinh (24 Nov 2020)

Local SGD: Unified Theory and New Efficient Methods
Eduard A. Gorbunov, Filip Hanzely, Peter Richtárik [FedML] (03 Nov 2020)

Variance-Reduced Methods for Machine Learning
Robert Mansel Gower, Mark W. Schmidt, Francis R. Bach, Peter Richtárik (02 Oct 2020)
Optimization for Supervised Machine Learning: Randomized Algorithms for Data and Parameters
Filip Hanzely (26 Aug 2020)

PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization
Zhize Li, Hongyan Bao, Xiangliang Zhang, Peter Richtárik [ODL] (25 Aug 2020)

Solving Stochastic Compositional Optimization is Nearly as Easy as Solving Stochastic Optimization
Tianyi Chen, Yuejiao Sun, W. Yin (25 Aug 2020)

Variance Reduction for Deep Q-Learning using Stochastic Recursive Gradient
Hao Jia, Xiao Zhang, Jun Xu, Wei Zeng, Hao Jiang, Xiao Yan, Ji-Rong Wen (25 Jul 2020)

Stochastic Hamiltonian Gradient Methods for Smooth Games
Nicolas Loizou, Hugo Berard, Alexia Jolicoeur-Martineau, Pascal Vincent, Simon Lacoste-Julien, Ioannis Mitliagkas (08 Jul 2020)

Second-Order Information in Non-Convex Stochastic Optimization: Power and Limitations
Yossi Arjevani, Y. Carmon, John C. Duchi, Dylan J. Foster, Ayush Sekhari, Karthik Sridharan (24 Jun 2020)

An Online Method for A Class of Distributionally Robust Optimization with Non-Convex Objectives
Qi Qi, Zhishuai Guo, Yi Tian Xu, Rong Jin, Tianbao Yang (17 Jun 2020)
SONIA: A Symmetric Blockwise Truncated Optimization Algorithm
Majid Jahani, M. Nazari, R. Tappenden, A. Berahas, Martin Takáč [ODL] (06 Jun 2020)
Momentum-based variance-reduced proximal stochastic gradient method for composite nonconvex stochastic optimization
Yangyang Xu, Yibo Xu (31 May 2020)

Stochastic Recursive Momentum for Policy Gradient Methods
Huizhuo Yuan, Xiangru Lian, Ji Liu, Yuren Zhou (09 Mar 2020)

Flexible numerical optimization with ensmallen
Ryan R. Curtin, Marcus Edel, Rahul Prabhu, S. Basak, Zhihao Lou, Conrad Sanderson (09 Mar 2020)

Biased Stochastic First-Order Methods for Conditional Stochastic Optimization and Applications in Meta Learning
Yifan Hu, Siqi Zhang, Xin Chen, Niao He [ODL] (25 Feb 2020)

A Unified Convergence Analysis for Shuffling-Type Gradient Methods
Lam M. Nguyen, Quoc Tran-Dinh, Dzung Phan, Phuong Ha Nguyen, Marten van Dijk (19 Feb 2020)

Stochastic Gauss-Newton Algorithms for Nonconvex Compositional Optimization
Quoc Tran-Dinh, Nhan H. Pham, Lam M. Nguyen (17 Feb 2020)

Sampling and Update Frequencies in Proximal Variance-Reduced Stochastic Gradient Methods
Martin Morin, Pontus Giselsson (13 Feb 2020)

Gradient tracking and variance reduction for decentralized optimization and machine learning
Ran Xin, S. Kar, U. Khan (13 Feb 2020)

Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization
Samuel Horváth, Lihua Lei, Peter Richtárik, Michael I. Jordan (13 Feb 2020)
13 Feb 2020
Variance Reduced Coordinate Descent with Acceleration: New Method With a Surprising Application to Finite-Sum Problems
Filip Hanzely
D. Kovalev
Peter Richtárik
35
17
0
11 Feb 2020
Adaptive Stochastic Optimization
Frank E. Curtis
K. Scheinberg
ODL
16
29
0
18 Jan 2020
Variance Reduced Local SGD with Lower Communication Complexity
Xian-Feng Liang
Shuheng Shen
Jingchang Liu
Zhen Pan
Enhong Chen
Yifei Cheng
FedML
42
152
0
30 Dec 2019
Federated Variance-Reduced Stochastic Gradient Descent with Robustness to Byzantine Attacks
Zhaoxian Wu
Qing Ling
Tianyi Chen
G. Giannakis
FedML
AAML
32
181
0
29 Dec 2019
Convergence Analysis of Block Coordinate Algorithms with Determinantal Sampling
Mojmír Mutný
Michal Derezinski
Andreas Krause
38
21
0
25 Oct 2019
History-Gradient Aided Batch Size Adaptation for Variance Reduced Algorithms
Kaiyi Ji
Zhe Wang
Bowen Weng
Yi Zhou
Wei Zhang
Yingbin Liang
ODL
18
5
0
21 Oct 2019
Sample Efficient Policy Gradient Methods with Recursive Variance Reduction
Pan Xu
F. Gao
Quanquan Gu
31
83
0
18 Sep 2019
Stochastic First-order Methods for Convex and Nonconvex Functional Constrained Optimization
Digvijay Boob
Qi Deng
Guanghui Lan
52
92
0
07 Aug 2019
A Hybrid Stochastic Optimization Framework for Stochastic Composite Nonconvex Optimization
Quoc Tran-Dinh
Nhan H. Pham
T. Dzung
Lam M. Nguyen
27
49
0
08 Jul 2019