arXiv:1603.05953
Katyusha: The First Direct Acceleration of Stochastic Gradient Methods
18 March 2016
Zeyuan Allen-Zhu
ODL
Papers citing "Katyusha: The First Direct Acceleration of Stochastic Gradient Methods" (50 / 192 papers shown)
Stochastic Bias-Reduced Gradient Methods
Hilal Asi
Y. Carmon
A. Jambulapati
Yujia Jin
Aaron Sidford
76
30
0
17 Jun 2021
Dynamics of Stochastic Momentum Methods on Large-scale, Quadratic Models
Courtney Paquette
Elliot Paquette
ODL
102
14
0
07 Jun 2021
Practical Schemes for Finding Near-Stationary Points of Convex Finite-Sums
Kaiwen Zhou
Lai Tian
Anthony Man-Cho So
James Cheng
74
10
0
25 May 2021
Adaptive Newton Sketch: Linear-time Optimization with Quadratic Convergence and Effective Hessian Dimensionality
Jonathan Lacotte
Yifei Wang
Mert Pilanci
67
17
0
15 May 2021
Thinking Inside the Ball: Near-Optimal Minimization of the Maximal Loss
Y. Carmon
A. Jambulapati
Yujia Jin
Aaron Sidford
71
20
0
04 May 2021
ANITA: An Optimal Loopless Accelerated Variance-Reduced Gradient Method
Zhize Li
117
14
0
21 Mar 2021
Variance Reduction via Primal-Dual Accelerated Dual Averaging for Nonsmooth Convex Finite-Sums
Chaobing Song
Stephen J. Wright
Jelena Diakonikolas
142
17
0
26 Feb 2021
Machine Unlearning via Algorithmic Stability
Enayat Ullah
Tung Mai
Anup B. Rao
Ryan Rossi
R. Arora
107
111
0
25 Feb 2021
Learning with User-Level Privacy
Daniel Levy
Ziteng Sun
Kareem Amin
Satyen Kale
Alex Kulesza
M. Mohri
A. Suresh
FedML
130
91
0
23 Feb 2021
SVRG Meets AdaGrad: Painless Variance Reduction
Benjamin Dubois-Taine
Sharan Vaswani
Reza Babanezhad
Mark Schmidt
Simon Lacoste-Julien
61
18
0
18 Feb 2021
Stochastic Variance Reduction for Variational Inequality Methods
Ahmet Alacaoglu
Yura Malitsky
105
71
0
16 Feb 2021
Smoothness Matrices Beat Smoothness Constants: Better Communication Compression Techniques for Distributed Optimization
M. Safaryan
Filip Hanzely
Peter Richtárik
50
24
0
14 Feb 2021
Complementary Composite Minimization, Small Gradients in General Norms, and Applications
Jelena Diakonikolas
Cristóbal Guzmán
46
14
0
26 Jan 2021
First-Order Methods for Convex Optimization
Pavel Dvurechensky
Mathias Staudigl
Shimrit Shtern
ODL
94
26
0
04 Jan 2021
Global Riemannian Acceleration in Hyperbolic and Spherical Spaces
David Martínez-Rubio
141
20
0
07 Dec 2020
Relative Lipschitzness in Extragradient Methods and a Direct Recipe for Acceleration
Michael B. Cohen
Aaron Sidford
Kevin Tian
81
41
0
12 Nov 2020
AEGD: Adaptive Gradient Descent with Energy
Hailiang Liu
Xuping Tian
ODL
64
11
0
10 Oct 2020
Structured Logconcave Sampling with a Restricted Gaussian Oracle
Y. Lee
Ruoqi Shen
Kevin Tian
107
73
0
07 Oct 2020
Lower Bounds and Optimal Algorithms for Personalized Federated Learning
Filip Hanzely
Slavomír Hanzely
Samuel Horváth
Peter Richtárik
FedML
137
190
0
05 Oct 2020
Variance-Reduced Methods for Machine Learning
Robert Mansel Gower
Mark Schmidt
Francis R. Bach
Peter Richtárik
120
117
0
02 Oct 2020
Cross Learning in Deep Q-Networks
Xing Wang
A. Vinel
27
2
0
29 Sep 2020
Effective Proximal Methods for Non-convex Non-smooth Regularized Learning
Guannan Liang
Qianqian Tong
Jiahao Ding
Miao Pan
J. Bi
71
0
0
14 Sep 2020
Optimization for Supervised Machine Learning: Randomized Algorithms for Data and Parameters
Filip Hanzely
87
0
0
26 Aug 2020
PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization
Zhize Li
Hongyan Bao
Xiangliang Zhang
Peter Richtárik
ODL
121
130
0
25 Aug 2020
An Accelerated DFO Algorithm for Finite-sum Convex Functions
Yuwen Chen
Antonio Orvieto
Aurelien Lucchi
89
15
0
07 Jul 2020
Variance Reduction via Accelerated Dual Averaging for Finite-Sum Optimization
Chaobing Song
Yong Jiang
Yi-An Ma
171
23
0
18 Jun 2020
Nearly Linear Row Sampling Algorithm for Quantile Regression
Yi Li
Ruosong Wang
Lin F. Yang
Hanrui Zhang
59
7
0
15 Jun 2020
A Unified Analysis of Stochastic Gradient Methods for Nonconvex Federated Optimization
Zhize Li
Peter Richtárik
FedML
101
36
0
12 Jun 2020
Beyond Worst-Case Analysis in Stochastic Approximation: Moment Estimation Improves Instance Complexity
J.N. Zhang
Hongzhou Lin
Subhro Das
S. Sra
Ali Jadbabaie
46
1
0
08 Jun 2020
Improved SVRG for quadratic functions
N. Kahalé
55
0
0
01 Jun 2020
An Optimal Algorithm for Decentralized Finite Sum Optimization
Aymeric Dieuleveut
Francis R. Bach
Laurent Massoulie
73
45
0
20 May 2020
Momentum with Variance Reduction for Nonconvex Composition Optimization
Ziyi Chen
Yi Zhou
ODL
75
3
0
15 May 2020
Spike-Triggered Descent
Michael Kummer
Arunava Banerjee
20
0
0
12 May 2020
Flexible numerical optimization with ensmallen
Ryan R. Curtin
Marcus Edel
Rahul Prabhu
S. Basak
Zhihao Lou
Conrad Sanderson
86
1
0
09 Mar 2020
On the Convergence of Nesterov's Accelerated Gradient Method in Stochastic Settings
Mahmoud Assran
Michael G. Rabbat
80
59
0
27 Feb 2020
On Biased Compression for Distributed Learning
Aleksandr Beznosikov
Samuel Horváth
Peter Richtárik
M. Safaryan
111
189
0
27 Feb 2020
Acceleration for Compressed Gradient Descent in Distributed and Federated Optimization
Zhize Li
D. Kovalev
Xun Qian
Peter Richtárik
FedML
AI4CE
129
137
0
26 Feb 2020
Scheduled Restart Momentum for Accelerated Stochastic Gradient Descent
Bao Wang
T. Nguyen
Andrea L. Bertozzi
Richard G. Baraniuk
Stanley J. Osher
ODL
82
49
0
24 Feb 2020
Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization
Samuel Horváth
Lihua Lei
Peter Richtárik
Michael I. Jordan
114
30
0
13 Feb 2020
Variance Reduced Coordinate Descent with Acceleration: New Method With a Surprising Application to Finite-Sum Problems
Filip Hanzely
D. Kovalev
Peter Richtárik
84
17
0
11 Feb 2020
Federated Learning of a Mixture of Global and Local Models
Filip Hanzely
Peter Richtárik
FedML
103
388
0
10 Feb 2020
Variance Reduction with Sparse Gradients
Melih Elibol
Lihua Lei
Michael I. Jordan
67
23
0
27 Jan 2020
Federated Variance-Reduced Stochastic Gradient Descent with Robustness to Byzantine Attacks
Zhaoxian Wu
Qing Ling
Tianyi Chen
G. Giannakis
FedML
AAML
123
186
0
29 Dec 2019
Optimization for deep learning: theory and algorithms
Ruoyu Sun
ODL
137
169
0
19 Dec 2019
Support Vector Machine Classifier via $L_{0/1}$ Soft-Margin Loss
Huajun Wang
Yuanhai Shao
Shenglong Zhou
Ce Zhang
N. Xiu
VLM
68
52
0
16 Dec 2019
Stochastic Newton and Cubic Newton Methods with Simple Local Linear-Quadratic Rates
D. Kovalev
Konstantin Mishchenko
Peter Richtárik
ODL
80
45
0
03 Dec 2019
Katyusha Acceleration for Convex Finite-Sum Compositional Optimization
Yibo Xu
Yangyang Xu
128
13
0
24 Oct 2019
The Practicality of Stochastic Optimization in Imaging Inverse Problems
Junqi Tang
K. Egiazarian
Mohammad Golbabaee
Mike Davies
79
32
0
22 Oct 2019
A Stochastic Extra-Step Quasi-Newton Method for Nonsmooth Nonconvex Optimization
Minghan Yang
Andre Milzarek
Zaiwen Wen
Tong Zhang
ODL
98
36
0
21 Oct 2019
A Stochastic Proximal Point Algorithm for Saddle-Point Problems
Luo Luo
Cheng Chen
Yujun Li
Guangzeng Xie
Zhihua Zhang
146
16
0
13 Sep 2019