arXiv:1603.05953
Katyusha: The First Direct Acceleration of Stochastic Gradient Methods
18 March 2016
Zeyuan Allen-Zhu
Papers citing "Katyusha: The First Direct Acceleration of Stochastic Gradient Methods" (showing 42 of 192)
| Title | Authors | Date |
|---|---|---|
| Stochastic Gradient Descent for Stochastic Doubly-Nonconvex Composite Optimization | Takayuki Kawashima, Hironori Fujisawa | 21 May 2018 |
| Stochastic model-based minimization of weakly convex functions | Damek Davis, Dmitriy Drusvyatskiy | 17 Mar 2018 |
| On the insufficiency of existing momentum schemes for Stochastic Optimization | Rahul Kidambi, Praneeth Netrapalli, Prateek Jain, Sham Kakade | 15 Mar 2018 |
| A Stochastic Semismooth Newton Method for Nonsmooth Nonconvex Optimization | Andre Milzarek, X. Xiao, Shicong Cen, Zaiwen Wen, M. Ulbrich | 09 Mar 2018 |
| Not All Samples Are Created Equal: Deep Learning with Importance Sampling | Angelos Katharopoulos, François Fleuret | 02 Mar 2018 |
| VR-SGD: A Simple Stochastic Variance Reduction Method for Machine Learning | Fanhua Shang, Kaiwen Zhou, Hongying Liu, James Cheng, Ivor W. Tsang, Lijun Zhang, Dacheng Tao, L. Jiao | 26 Feb 2018 |
| Differentially Private Empirical Risk Minimization Revisited: Faster and More General | Di Wang, Minwei Ye, Jinhui Xu | 14 Feb 2018 |
| A Simple Proximal Stochastic Gradient Method for Nonsmooth Nonconvex Optimization | Zhize Li, Jian Li | 13 Feb 2018 |
| Katyusha X: Practical Momentum Method for Stochastic Sum-of-Nonconvex Optimization | Zeyuan Allen-Zhu | 12 Feb 2018 |
| Mini-Batch Stochastic ADMMs for Nonconvex Nonsmooth Optimization | Feihu Huang, Songcan Chen | 08 Feb 2018 |
| Linear Convergence of the Primal-Dual Gradient Method for Convex-Concave Saddle Point Problems without Strong Convexity | S. Du, Wei Hu | 05 Feb 2018 |
| How To Make the Gradients Small Stochastically: Even Faster Convex and Nonconvex SGD | Zeyuan Allen-Zhu | 08 Jan 2018 |
| Momentum and Stochastic Momentum for Stochastic Gradient, Newton, Proximal Point and Subspace Descent Methods | Nicolas Loizou, Peter Richtárik | 27 Dec 2017 |
| The Power of Interpolation: Understanding the Effectiveness of SGD in Modern Over-parametrized Learning | Siyuan Ma, Raef Bassily, M. Belkin | 18 Dec 2017 |
| Catalyst Acceleration for First-order Convex Optimization: from Theory to Practice | Hongzhou Lin, Julien Mairal, Zaïd Harchaoui | 15 Dec 2017 |
| Random gradient extrapolation for distributed and stochastic optimization | Guanghui Lan, Yi Zhou | 15 Nov 2017 |
| Duality-free Methods for Stochastic Composition Optimization | Liu Liu, Ji Liu, Dacheng Tao | 26 Oct 2017 |
| Nesterov's Acceleration For Approximate Newton | Haishan Ye, Zhihua Zhang | 17 Oct 2017 |
| First-Order Adaptive Sample Size Methods to Reduce Complexity of Empirical Risk Minimization | Aryan Mokhtari, Alejandro Ribeiro | 02 Sep 2017 |
| Natasha 2: Faster Non-Convex Optimization Than SGD | Zeyuan Allen-Zhu | 29 Aug 2017 |
| An inexact subsampled proximal Newton-type method for large-scale machine learning | Xuanqing Liu, Cho-Jui Hsieh, Jason D. Lee, Yuekai Sun | 28 Aug 2017 |
| Accelerated Variance Reduced Stochastic ADMM | Yuanyuan Liu, Fanhua Shang, James Cheng | 11 Jul 2017 |
| Stochastic, Distributed and Federated Optimization for Machine Learning | Jakub Konecný | 04 Jul 2017 |
| Improved Optimization of Finite Sums with Minibatch Stochastic Variance Reduced Proximal Iterations | Jialei Wang, Tong Zhang | 21 Jun 2017 |
| SVM via Saddle Point Optimization: New Bounds and Distributed Algorithms | Yifei Jin, Lingxiao Huang, Jian Li | 20 May 2017 |
| Nestrov's Acceleration For Second Order Method | Haishan Ye, Zhihua Zhang | 19 May 2017 |
| Spectrum Approximation Beyond Fast Matrix Multiplication: Algorithms and Hardness | Cameron Musco, Praneeth Netrapalli, Aaron Sidford, Shashanka Ubaru, David P. Woodruff | 13 Apr 2017 |
| Stochastic L-BFGS: Improved Convergence Rates and Practical Acceleration Strategies | Renbo Zhao, W. Haskell, Vincent Y. F. Tan | 01 Apr 2017 |
| Catalyst Acceleration for Gradient-Based Non-Convex Optimization | Courtney Paquette, Hongzhou Lin, Dmitriy Drusvyatskiy, Julien Mairal, Zaïd Harchaoui | 31 Mar 2017 |
| Doubly Accelerated Stochastic Variance Reduced Dual Averaging Method for Regularized Empirical Risk Minimization | Tomoya Murata, Taiji Suzuki | 01 Mar 2017 |
| SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient | Lam M. Nguyen, Jie Liu, K. Scheinberg, Martin Takáč | 01 Mar 2017 |
| Natasha: Faster Non-Convex Stochastic Optimization Via Strongly Non-Convex Parameter | Zeyuan Allen-Zhu | 02 Feb 2017 |
| Federated Optimization: Distributed Machine Learning for On-Device Intelligence | Jakub Konecný, H. B. McMahan, Daniel Ramage, Peter Richtárik | 08 Oct 2016 |
| Stochastic Optimization with Variance Reduction for Infinite Datasets with Finite-Sum Structure | A. Bietti, Julien Mairal | 04 Oct 2016 |
| Less than a Single Pass: Stochastically Controlled Stochastic Gradient Method | Lihua Lei, Michael I. Jordan | 12 Sep 2016 |
| Faster Principal Component Regression and Stable Matrix Chebyshev Approximation | Zeyuan Allen-Zhu, Yuanzhi Li | 16 Aug 2016 |
| Doubly Accelerated Methods for Faster CCA and Generalized Eigendecomposition | Zeyuan Allen-Zhu, Yuanzhi Li | 20 Jul 2016 |
| Tight Complexity Bounds for Optimizing Composite Objectives | Blake E. Woodworth, Nathan Srebro | 25 May 2016 |
| Variance Reduction for Faster Non-Convex Optimization | Zeyuan Allen-Zhu, Elad Hazan | 17 Mar 2016 |
| Optimal Black-Box Reductions Between Optimization Objectives | Zeyuan Allen-Zhu, Elad Hazan | 17 Mar 2016 |
| On the Influence of Momentum Acceleration on Online Learning | Kun Yuan, Bicheng Ying, Ali H. Sayed | 14 Mar 2016 |
| Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization | Yuchen Zhang, Xiao Lin | 10 Sep 2014 |