Cited By
Improved SVRG for Non-Strongly-Convex or Sum-of-Non-Convex Objectives
Zeyuan Allen-Zhu, Yang Yuan
arXiv:1506.01972 · 5 June 2015
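For orientation before the citation list: the paper above extends the analysis of SVRG to non-strongly-convex objectives and to sums of non-convex components. Below is a minimal sketch of the classic SVRG estimator of Johnson and Zhang (2013) that it builds on, not the paper's improved variant; the helper names (`svrg`, `grad_i`, `full_grad`) and the least-squares demo are illustrative assumptions.

```python
import numpy as np

def svrg(grad_i, full_grad, x0, n, step=0.01, epochs=20, m=None):
    """Classic SVRG sketch: each epoch computes one full gradient at a
    snapshot point, then takes m cheap variance-reduced stochastic steps."""
    x = x0.copy()
    m = m or 2 * n                      # common choice for inner-loop length
    for _ in range(epochs):
        snap = x.copy()                 # snapshot x~
        mu = full_grad(snap)            # full gradient at the snapshot
        for _ in range(m):
            i = np.random.randint(n)    # sample one component f_i
            # Variance-reduced estimate: unbiased (E[v] = full_grad(x)),
            # and its variance shrinks as x approaches the snapshot.
            v = grad_i(i, x) - grad_i(i, snap) + mu
            x -= step * v
    return x

# Illustrative least-squares instance (data and names are assumptions).
rng = np.random.default_rng(0)
A, b = rng.normal(size=(100, 5)), rng.normal(size=100)
grad_i = lambda i, x: A[i] * (A[i] @ x - b[i])      # gradient of (1/2)(a_i^T x - b_i)^2
full_grad = lambda x: A.T @ (A @ x - b) / len(b)    # average of the grad_i
x_hat = svrg(grad_i, full_grad, np.zeros(5), n=len(b))
```

The correction term `grad_i(i, snap) - mu` is what keeps the estimate unbiased while driving its variance toward zero near the snapshot; the cited paper's contribution is to carry this analysis beyond the strongly convex setting.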
Papers citing "Improved SVRG for Non-Strongly-Convex or Sum-of-Non-Convex Objectives" (32 of 32 papers shown):
1. Computing Approximate ℓp Sensitivities
   Swati Padmanabhan, David P. Woodruff, Qiuyi Zhang · 07 Nov 2023
2. Sarah Frank-Wolfe: Methods for Constrained Optimization with Best Rates and Practical Features
   Aleksandr Beznosikov, David Dobre, Gauthier Gidel · 23 Apr 2023
3. Rethinking Model Ensemble in Transfer-based Adversarial Attacks
   Huanran Chen, Yichi Zhang, Yinpeng Dong, Xiao Yang, Hang Su, Junyi Zhu · 16 Mar 2023 [AAML]
4. Adaptive Stochastic Optimisation of Nonconvex Composite Objectives
   Weijia Shao, F. Sivrikaya, S. Albayrak · 21 Nov 2022
5. SARAH-based Variance-reduced Algorithm for Stochastic Finite-sum Cocoercive Variational Inequalities
   Aleksandr Beznosikov, Alexander Gasnikov · 12 Oct 2022
6. Efficiency Ordering of Stochastic Gradient Descent
   Jie Hu, Vishwaraj Doshi, Do Young Eun · 15 Sep 2022
7. Stochastic Variance-Reduced Newton: Accelerating Finite-Sum Minimization with Large Batches
   Michal Derezinski · 06 Jun 2022
8. SADAM: Stochastic Adam, A Stochastic Operator for First-Order Gradient-based Optimizer
   Wei Zhang, Yun-Jian Bao · 20 May 2022 [ODL]
9. Hessian Averaging in Stochastic Newton Methods Achieves Superlinear Convergence
   Sen Na, Michal Derezinski, Michael W. Mahoney · 20 Apr 2022
10. Random-reshuffled SARAH does not need a full gradient computations
    Aleksandr Beznosikov, Martin Takáč · 26 Nov 2021
11. ANITA: An Optimal Loopless Accelerated Variance-Reduced Gradient Method
    Zhize Li · 21 Mar 2021
12. Variance Reduction via Accelerated Dual Averaging for Finite-Sum Optimization
    Chaobing Song, Yong Jiang, Yi Ma · 18 Jun 2020
13. A Unifying Framework for Variance Reduction Algorithms for Finding Zeroes of Monotone Operators
    Xun Zhang, W. Haskell, Z. Ye · 22 Jun 2019
14. ProxSARAH: An Efficient Algorithmic Framework for Stochastic Composite Nonconvex Optimization
    Nhan H. Pham, Lam M. Nguyen, Dzung Phan, Quoc Tran-Dinh · 15 Feb 2019
15. Characterization of Convex Objective Functions and Optimal Expected Convergence Rates for SGD
    Marten van Dijk, Lam M. Nguyen, Phuong Ha Nguyen, Dzung Phan · 09 Oct 2018
16. Continuous-time Models for Stochastic Optimization Algorithms
    Antonio Orvieto, Aurelien Lucchi · 05 Oct 2018
17. AdaGrad stepsizes: Sharp convergence over nonconvex landscapes
    Rachel A. Ward, Xiaoxia Wu, Léon Bottou · 05 Jun 2018 [ODL]
18. Katyusha X: Practical Momentum Method for Stochastic Sum-of-Nonconvex Optimization
    Zeyuan Allen-Zhu · 12 Feb 2018 [ODL]
19. Natasha 2: Faster Non-Convex Optimization Than SGD
    Zeyuan Allen-Zhu · 29 Aug 2017 [ODL]
20. Convergence Analysis of Proximal Gradient with Momentum for Nonconvex Optimization
    Qunwei Li, Yi Zhou, Yingbin Liang, P. Varshney · 14 May 2017
21. Doubly Accelerated Stochastic Variance Reduced Dual Averaging Method for Regularized Empirical Risk Minimization
    Tomoya Murata, Taiji Suzuki · 01 Mar 2017 [OffRL]
22. SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient
    Lam M. Nguyen, Jie Liu, K. Scheinberg, Martin Takáč · 01 Mar 2017 [ODL]
23. Natasha: Faster Non-Convex Stochastic Optimization Via Strongly Non-Convex Parameter
    Zeyuan Allen-Zhu · 02 Feb 2017
24. Follow the Compressed Leader: Faster Online Learning of Eigenvectors and Faster MMWU
    Zeyuan Allen-Zhu, Yuanzhi Li · 06 Jan 2017
25. Less than a Single Pass: Stochastically Controlled Stochastic Gradient Method
    Lihua Lei, Michael I. Jordan · 12 Sep 2016
26. LazySVD: Even Faster SVD Decomposition Yet Without Agonizing Pain
    Zeyuan Allen-Zhu, Yuanzhi Li · 12 Jul 2016
27. Accelerate Stochastic Subgradient Method by Leveraging Local Growth Condition
    Yi Tian Xu, Qihang Lin, Tianbao Yang · 04 Jul 2016
28. Fast Stochastic Methods for Nonsmooth Nonconvex Optimization
    Sashank J. Reddi, S. Sra, Barnabás Póczós, Alex Smola · 23 May 2016
29. Katyusha: The First Direct Acceleration of Stochastic Gradient Methods
    Zeyuan Allen-Zhu · 18 Mar 2016 [ODL]
30. Variance Reduction for Faster Non-Convex Optimization
    Zeyuan Allen-Zhu, Elad Hazan · 17 Mar 2016 [ODL]
31. A Proximal Stochastic Gradient Method with Progressive Variance Reduction
    Lin Xiao, Tong Zhang · 19 Mar 2014 [ODL]
32. Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning
    Julien Mairal · 18 Feb 2014