Improved SVRG for Non-Strongly-Convex or Sum-of-Non-Convex Objectives

5 June 2015
Zeyuan Allen-Zhu, Yang Yuan
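For context on the subject of this paper: SVRG targets finite-sum objectives f(x) = (1/n) Σ_i f_i(x) and reduces the variance of stochastic gradients by correcting each inner step with a full gradient computed at a periodic snapshot point. Below is a minimal sketch of the standard SVRG loop (Johnson & Zhang, 2013), which this paper extends to non-strongly-convex and sum-of-non-convex settings; the grad_i callback, step size, and loop lengths are illustrative assumptions, not code from the paper.

import numpy as np

def svrg(grad_i, x0, n, eta=0.05, epochs=20, m=None, seed=0):
    # Minimal SVRG sketch for f(x) = (1/n) * sum_i f_i(x).
    # grad_i(x, i) must return the gradient of the i-th component f_i at x.
    rng = np.random.default_rng(seed)
    m = 2 * n if m is None else m              # inner-loop length (a common default)
    x_snap = np.asarray(x0, dtype=float)
    for _ in range(epochs):
        # Full gradient at the snapshot; this is the variance-reduction anchor.
        mu = np.mean([grad_i(x_snap, i) for i in range(n)], axis=0)
        x = x_snap.copy()
        for _ in range(m):
            i = rng.integers(n)
            # Variance-reduced stochastic gradient: unbiased, with variance
            # that shrinks as x and x_snap approach the optimum.
            g = grad_i(x, i) - grad_i(x_snap, i) + mu
            x = x - eta * g
        x_snap = x                             # last iterate becomes the new snapshot
    return x_snap

# Hypothetical usage: least squares with f_i(x) = 0.5 * (a_i @ x - b_i)**2.
# A = np.random.randn(100, 5); b = A @ np.ones(5)
# x_hat = svrg(lambda x, i: (A[i] @ x - b[i]) * A[i], np.zeros(5), n=100)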

Papers citing "Improved SVRG for Non-Strongly-Convex or Sum-of-Non-Convex Objectives"

32 / 32 papers shown
Computing Approximate $\ell_p$ Sensitivities
Swati Padmanabhan, David P. Woodruff, Qiuyi Zhang · 07 Nov 2023

Sarah Frank-Wolfe: Methods for Constrained Optimization with Best Rates and Practical Features
Aleksandr Beznosikov, David Dobre, Gauthier Gidel · 23 Apr 2023

Rethinking Model Ensemble in Transfer-based Adversarial Attacks
Huanran Chen, Yichi Zhang, Yinpeng Dong, Xiao Yang, Hang Su, Junyi Zhu · 16 Mar 2023 · AAML

Adaptive Stochastic Optimisation of Nonconvex Composite Objectives
Weijia Shao, F. Sivrikaya, S. Albayrak · 21 Nov 2022

SARAH-based Variance-reduced Algorithm for Stochastic Finite-sum Cocoercive Variational Inequalities
Aleksandr Beznosikov, Alexander Gasnikov · 12 Oct 2022

Efficiency Ordering of Stochastic Gradient Descent
Jie Hu, Vishwaraj Doshi, Do Young Eun · 15 Sep 2022

Stochastic Variance-Reduced Newton: Accelerating Finite-Sum Minimization with Large Batches
Michal Derezinski · 06 Jun 2022

SADAM: Stochastic Adam, A Stochastic Operator for First-Order Gradient-based Optimizer
Wei Zhang, Yun-Jian Bao · 20 May 2022 · ODL

Hessian Averaging in Stochastic Newton Methods Achieves Superlinear Convergence
Sen Na, Michal Derezinski, Michael W. Mahoney · 20 Apr 2022

Random-reshuffled SARAH does not need a full gradient computations
Aleksandr Beznosikov, Martin Takáč · 26 Nov 2021

ANITA: An Optimal Loopless Accelerated Variance-Reduced Gradient Method
Zhize Li · 21 Mar 2021

Variance Reduction via Accelerated Dual Averaging for Finite-Sum Optimization
Chaobing Song, Yong Jiang, Yi Ma · 18 Jun 2020

A Unifying Framework for Variance Reduction Algorithms for Finding Zeroes of Monotone Operators
Xun Zhang, W. Haskell, Z. Ye · 22 Jun 2019

ProxSARAH: An Efficient Algorithmic Framework for Stochastic Composite Nonconvex Optimization
Nhan H. Pham, Lam M. Nguyen, Dzung Phan, Quoc Tran-Dinh · 15 Feb 2019

Characterization of Convex Objective Functions and Optimal Expected Convergence Rates for SGD
Marten van Dijk, Lam M. Nguyen, Phuong Ha Nguyen, Dzung Phan · 09 Oct 2018

Continuous-time Models for Stochastic Optimization Algorithms
Antonio Orvieto, Aurelien Lucchi · 05 Oct 2018

AdaGrad stepsizes: Sharp convergence over nonconvex landscapes
Rachel A. Ward, Xiaoxia Wu, Léon Bottou · 05 Jun 2018 · ODL

Katyusha X: Practical Momentum Method for Stochastic Sum-of-Nonconvex Optimization
Zeyuan Allen-Zhu · 12 Feb 2018 · ODL

Natasha 2: Faster Non-Convex Optimization Than SGD
Zeyuan Allen-Zhu · 29 Aug 2017 · ODL

Convergence Analysis of Proximal Gradient with Momentum for Nonconvex Optimization
Qunwei Li, Yi Zhou, Yingbin Liang, P. Varshney · 14 May 2017

Doubly Accelerated Stochastic Variance Reduced Dual Averaging Method for Regularized Empirical Risk Minimization
Tomoya Murata, Taiji Suzuki · 01 Mar 2017 · OffRL

SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient
Lam M. Nguyen, Jie Liu, K. Scheinberg, Martin Takáč · 01 Mar 2017 · ODL

Natasha: Faster Non-Convex Stochastic Optimization Via Strongly Non-Convex Parameter
Zeyuan Allen-Zhu · 02 Feb 2017

Follow the Compressed Leader: Faster Online Learning of Eigenvectors and Faster MMWU
Zeyuan Allen-Zhu, Yuanzhi Li · 06 Jan 2017

Less than a Single Pass: Stochastically Controlled Stochastic Gradient Method
Lihua Lei, Michael I. Jordan · 12 Sep 2016

LazySVD: Even Faster SVD Decomposition Yet Without Agonizing Pain
Zeyuan Allen-Zhu, Yuanzhi Li · 12 Jul 2016

Accelerate Stochastic Subgradient Method by Leveraging Local Growth Condition
Yi Tian Xu, Qihang Lin, Tianbao Yang · 04 Jul 2016

Fast Stochastic Methods for Nonsmooth Nonconvex Optimization
Sashank J. Reddi, S. Sra, Barnabás Póczós, Alex Smola · 23 May 2016

Katyusha: The First Direct Acceleration of Stochastic Gradient Methods
Zeyuan Allen-Zhu · 18 Mar 2016 · ODL

Variance Reduction for Faster Non-Convex Optimization
Zeyuan Allen-Zhu, Elad Hazan · 17 Mar 2016 · ODL

A Proximal Stochastic Gradient Method with Progressive Variance Reduction
Lin Xiao, Tong Zhang · 19 Mar 2014 · ODL

Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning
Julien Mairal · 18 Feb 2014