Katyusha: The First Direct Acceleration of Stochastic Gradient Methods
Zeyuan Allen-Zhu
18 March 2016
arXiv:1603.05953
Papers citing "Katyusha: The First Direct Acceleration of Stochastic Gradient Methods" (50 of 297 shown):
1. Near Optimal Stochastic Algorithms for Finite-Sum Unbalanced Convex-Concave Minimax Optimization. Luo Luo, Guangzeng Xie, Tong Zhang, Zhihua Zhang. 03 Jun 2021.
2. Practical Schemes for Finding Near-Stationary Points of Convex Finite-Sums. Kaiwen Zhou, Lai Tian, Anthony Man-Cho So, James Cheng. 25 May 2021.
3. Adaptive Newton Sketch: Linear-time Optimization with Quadratic Convergence and Effective Hessian Dimensionality. Jonathan Lacotte, Yifei Wang, Mert Pilanci. 15 May 2021.
4. Thinking Inside the Ball: Near-Optimal Minimization of the Maximal Loss. Y. Carmon, A. Jambulapati, Yujia Jin, Aaron Sidford. 04 May 2021.
5. Generalization of GANs and overparameterized models under Lipschitz continuity. Khoat Than, Nghia D. Vu. 06 Apr 2021.
6. Stochastic Reweighted Gradient Descent. Ayoub El Hanchi, D. Stephens. 23 Mar 2021.
7. ANITA: An Optimal Loopless Accelerated Variance-Reduced Gradient Method. Zhize Li. 21 Mar 2021.
8. Variance Reduction via Primal-Dual Accelerated Dual Averaging for Nonsmooth Convex Finite-Sums. Chaobing Song, Stephen J. Wright, Jelena Diakonikolas. 26 Feb 2021.
9. Machine Unlearning via Algorithmic Stability. Enayat Ullah, Tung Mai, Anup B. Rao, Ryan Rossi, R. Arora. 25 Feb 2021.
10. Learning with User-Level Privacy. Daniel Levy, Ziteng Sun, Kareem Amin, Satyen Kale, Alex Kulesza, M. Mohri, A. Suresh. 23 Feb 2021.
11. SVRG Meets AdaGrad: Painless Variance Reduction. Benjamin Dubois-Taine, Sharan Vaswani, Reza Babanezhad, Mark Schmidt, Simon Lacoste-Julien. 18 Feb 2021.
12. ADOM: Accelerated Decentralized Optimization Method for Time-Varying Networks. D. Kovalev, Egor Shulgin, Peter Richtárik, Alexander Rogozin, Alexander Gasnikov. 18 Feb 2021.
13. Stochastic Variance Reduction for Variational Inequality Methods. Ahmet Alacaoglu, Yura Malitsky. 16 Feb 2021.
14. Smoothness Matrices Beat Smoothness Constants: Better Communication Compression Techniques for Distributed Optimization. M. Safaryan, Filip Hanzely, Peter Richtárik. 14 Feb 2021.
15. Complementary Composite Minimization, Small Gradients in General Norms, and Applications. Jelena Diakonikolas, Cristóbal Guzmán. 26 Jan 2021.
16. A Comprehensive Study on Optimization Strategies for Gradient Descent In Deep Learning. K. Yadav. 07 Jan 2021.
17. Delayed Projection Techniques for Linearly Constrained Problems: Convergence Rates, Acceleration, and Applications. Xiang Li, Zhihua Zhang. 05 Jan 2021.
18. First-Order Methods for Convex Optimization. Pavel Dvurechensky, Mathias Staudigl, Shimrit Shtern. 04 Jan 2021.
19. Global Riemannian Acceleration in Hyperbolic and Spherical Spaces. David Martínez-Rubio. 07 Dec 2020.
20. Relative Lipschitzness in Extragradient Methods and a Direct Recipe for Acceleration. Michael B. Cohen, Aaron Sidford, Kevin Tian. 12 Nov 2020.
21. Factorization Machines with Regularization for Sparse Feature Interactions. Kyohei Atarashi, S. Oyama, M. Kurihara. 19 Oct 2020.
22. Tight Lower Complexity Bounds for Strongly Convex Finite-Sum Optimization. Min Zhang, Yao Shu, Kun He. 17 Oct 2020.
23. AEGD: Adaptive Gradient Descent with Energy. Hailiang Liu, Xuping Tian. 10 Oct 2020.
24. Structured Logconcave Sampling with a Restricted Gaussian Oracle. Y. Lee, Ruoqi Shen, Kevin Tian. 07 Oct 2020.
25. Lower Bounds and Optimal Algorithms for Personalized Federated Learning. Filip Hanzely, Slavomír Hanzely, Samuel Horváth, Peter Richtárik. 05 Oct 2020.
26. Variance-Reduced Methods for Machine Learning. Robert Mansel Gower, Mark Schmidt, Francis R. Bach, Peter Richtárik. 02 Oct 2020.
27. Cross Learning in Deep Q-Networks. Xing Wang, A. Vinel. 29 Sep 2020.
28. Escaping Saddle-Points Faster under Interpolation-like Conditions. Abhishek Roy, Krishnakumar Balasubramanian, Saeed Ghadimi, P. Mohapatra. 28 Sep 2020.
29. Asynchronous Distributed Optimization with Stochastic Delays. Margalit Glasgow, Mary Wootters. 22 Sep 2020.
30. Hybrid Stochastic-Deterministic Minibatch Proximal Gradient: Less-Than-Single-Pass Optimization with Nearly Optimal Generalization. Pan Zhou, Xiaotong Yuan. 18 Sep 2020.
31. Effective Proximal Methods for Non-convex Non-smooth Regularized Learning. Guannan Liang, Qianqian Tong, Jiahao Ding, Miao Pan, J. Bi. 14 Sep 2020.
32. A general framework for decentralized optimization with first-order methods. Ran Xin, Shi Pu, Angelia Nedić, U. Khan. 12 Sep 2020.
33. Optimization for Supervised Machine Learning: Randomized Algorithms for Data and Parameters. Filip Hanzely. 26 Aug 2020.
34. PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization. Zhize Li, Hongyan Bao, Xiangliang Zhang, Peter Richtárik. 25 Aug 2020.
35. Fast and Near-Optimal Diagonal Preconditioning. A. Jambulapati, Jingkai Li, Christopher Musco, Aaron Sidford, Kevin Tian. 04 Aug 2020.
36. Accelerated Stochastic Gradient-free and Projection-free Methods. Feihu Huang, Lue Tao, Songcan Chen. 16 Jul 2020.
37. Streaming Complexity of SVMs. Alexandr Andoni, Collin Burns, Yi Li, S. Mahabadi, David P. Woodruff. 07 Jul 2020.
38. An Accelerated DFO Algorithm for Finite-sum Convex Functions. Yuwen Chen, Antonio Orvieto, Aurelien Lucchi. 07 Jul 2020.
39. Unified Analysis of Stochastic Gradient Methods for Composite Convex and Smooth Optimization. Ahmed Khaled, Othmane Sebbouh, Nicolas Loizou, Robert Mansel Gower, Peter Richtárik. 20 Jun 2020.
40. Variance Reduction via Accelerated Dual Averaging for Finite-Sum Optimization. Chaobing Song, Yong Jiang, Yi Ma. 18 Jun 2020.
41. Enhance Curvature Information by Structured Stochastic Quasi-Newton Methods. Minghan Yang, Dong Xu, Yongfeng Li, Zaiwen Wen, Mengyun Chen. 17 Jun 2020.
42. Nearly Linear Row Sampling Algorithm for Quantile Regression. Yi Li, Ruosong Wang, Lin F. Yang, Hanrui Zhang. 15 Jun 2020.
43. A Unified Analysis of Stochastic Gradient Methods for Nonconvex Federated Optimization. Zhize Li, Peter Richtárik. 12 Jun 2020.
44. Beyond Worst-Case Analysis in Stochastic Approximation: Moment Estimation Improves Instance Complexity. Jiaming Zhang, Hongzhou Lin, Subhro Das, S. Sra, Ali Jadbabaie. 08 Jun 2020.
45. Improved SVRG for quadratic functions. N. Kahalé. 01 Jun 2020.
46. Boosting First-Order Methods by Shifting Objective: New Schemes with Faster Worst-Case Rates. Kaiwen Zhou, Anthony Man-Cho So, James Cheng. 25 May 2020.
47. An Optimal Algorithm for Decentralized Finite Sum Optimization. Hadrien Hendrikx, Francis R. Bach, Laurent Massoulie. 20 May 2020.
48. Momentum with Variance Reduction for Nonconvex Composition Optimization. Ziyi Chen, Yi Zhou. 15 May 2020.
49. Spike-Triggered Descent. Michael Kummer, Arunava Banerjee. 12 May 2020.
50. Flexible numerical optimization with ensmallen. Ryan R. Curtin, Marcus Edel, Rahul Prabhu, S. Basak, Zhihao Lou, Conrad Sanderson. 09 Mar 2020.