SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient

1 March 2017
Lam M. Nguyen, Jie Liu, K. Scheinberg, Martin Takáč
Tags: ODL
Links: arXiv · PDF · HTML
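
For orientation before the list of citing papers: the update that gives SARAH its name is a recursive gradient estimator. A minimal sketch in standard notation (step size \eta, component gradients \nabla f_i, sampled index i_t; these symbols are assumptions of this summary, not taken from the listing): each outer iteration starts from a full gradient, v_0 = \nabla f(w_0), and the inner loop then iterates

    v_t = \nabla f_{i_t}(w_t) - \nabla f_{i_t}(w_{t-1}) + v_{t-1},
    w_{t+1} = w_t - \eta v_t.

Unlike SVRG, the correction term is evaluated at the previous iterate rather than at a fixed snapshot, so the estimator is biased but its variance is controlled recursively.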

Papers citing "SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient"

25 of 125 papers shown.

A Hybrid Stochastic Optimization Framework for Stochastic Composite Nonconvex Optimization
Quoc Tran-Dinh, Nhan H. Pham, Dzung T. Phan, Lam M. Nguyen
27 · 49 · 0 · 08 Jul 2019

An Improved Convergence Analysis of Stochastic Variance-Reduced Policy Gradient
Pan Xu, F. Gao, Quanquan Gu
10 · 93 · 0 · 29 May 2019

Cocoercivity, Smoothness and Bias in Variance-Reduced Stochastic Gradient Methods
Martin Morin, Pontus Giselsson
20 · 2 · 0 · 21 Mar 2019

Quantized Frank-Wolfe: Faster Optimization, Lower Communication, and Projection Free
Mingrui Zhang, Lin Chen, Aryan Mokhtari, Hamed Hassani, Amin Karbasi
16 · 8 · 0 · 17 Feb 2019

ProxSARAH: An Efficient Algorithmic Framework for Stochastic Composite Nonconvex Optimization
Nhan H. Pham, Lam M. Nguyen, Dzung Phan, Quoc Tran-Dinh
16 · 139 · 0 · 15 Feb 2019

Quasi-Newton Methods for Machine Learning: Forget the Past, Just Sample
A. Berahas, Majid Jahani, Peter Richtárik, Martin Takáč
24 · 40 · 0 · 28 Jan 2019

Estimate Sequences for Stochastic Composite Optimization: Variance Reduction, Acceleration, and Robustness to Noise
A. Kulunchakov, Julien Mairal
32 · 44 · 0 · 25 Jan 2019

Don't Jump Through Hoops and Remove Those Loops: SVRG and Katyusha are Better Without the Outer Loop
D. Kovalev, Samuel Horváth, Peter Richtárik
36 · 155 · 0 · 24 Jan 2019

On the Ineffectiveness of Variance Reduced Optimization for Deep Learning
Aaron Defazio, Léon Bottou
Tags: UQCV, DRL
23 · 112 · 0 · 11 Dec 2018

R-SPIDER: A Fast Riemannian Stochastic Optimization Algorithm with Curvature Independent Rate
Jiaming Zhang, Hongyi Zhang, S. Sra
26 · 39 · 0 · 10 Nov 2018

New Convergence Aspects of Stochastic Gradient Algorithms
Lam M. Nguyen, Phuong Ha Nguyen, Peter Richtárik, K. Scheinberg, Martin Takáč, Marten van Dijk
23 · 66 · 0 · 10 Nov 2018

Efficient Distributed Hessian Free Algorithm for Large-scale Empirical Risk Minimization via Accumulating Sample Strategy
Majid Jahani, Xi He, Chenxin Ma, Aryan Mokhtari, Dheevatsa Mudigere, Alejandro Ribeiro, Martin Takáč
22 · 18 · 0 · 26 Oct 2018

SpiderBoost and Momentum: Faster Stochastic Variance Reduction Algorithms
Zhe Wang, Kaiyi Ji, Yi Zhou, Yingbin Liang, Vahid Tarokh
Tags: ODL
35 · 81 · 0 · 25 Oct 2018

Characterization of Convex Objective Functions and Optimal Expected Convergence Rates for SGD
Marten van Dijk, Lam M. Nguyen, Phuong Ha Nguyen, Dzung Phan
36 · 6 · 0 · 09 Oct 2018

SEGA: Variance Reduction via Gradient Sketching
Filip Hanzely, Konstantin Mishchenko, Peter Richtárik
25 · 71 · 0 · 09 Sep 2018

On the Acceleration of L-BFGS with Second-Order Information and Stochastic Batches
Jie Liu, Yu Rong, Martin Takáč, Junzhou Huang
Tags: ODL
38 · 7 · 0 · 14 Jul 2018

SPIDER: Near-Optimal Non-Convex Optimization via Stochastic Path Integrated Differential Estimator
Cong Fang, C. J. Li, Zhouchen Lin, Tong Zhang
50 · 571 · 0 · 04 Jul 2018

Stochastic Nested Variance Reduction for Nonconvex Optimization
Dongruo Zhou, Pan Xu, Quanquan Gu
25 · 146 · 0 · 20 Jun 2018

Slow and Stale Gradients Can Win the Race: Error-Runtime Trade-offs in Distributed SGD
Sanghamitra Dutta, Gauri Joshi, Soumyadip Ghosh, Parijat Dube, P. Nagpurkar
31 · 194 · 0 · 03 Mar 2018

SGD and Hogwild! Convergence Without the Bounded Gradients Assumption
Lam M. Nguyen, Phuong Ha Nguyen, Marten van Dijk, Peter Richtárik, K. Scheinberg, Martin Takáč
47 · 226 · 0 · 11 Feb 2018

Optimization Methods for Supervised Machine Learning: From Linear Models to Deep Learning
Frank E. Curtis, K. Scheinberg
39 · 45 · 0 · 30 Jun 2017

Stochastic Recursive Gradient Algorithm for Nonconvex Optimization
Lam M. Nguyen, Jie Liu, K. Scheinberg, Martin Takáč
11 · 94 · 0 · 20 May 2017

Projected Semi-Stochastic Gradient Descent Method with Mini-Batch Scheme under Weak Strong Convexity Assumption
Jie Liu, Martin Takáč
Tags: ODL
20 · 4 · 0 · 16 Dec 2016

Accelerated Randomized Mirror Descent Algorithms For Composite Non-strongly Convex Optimization
L. Hien, Cuong V. Nguyen, Huan Xu, Canyi Lu, Jiashi Feng
28 · 19 · 0 · 23 May 2016

A Proximal Stochastic Gradient Method with Progressive Variance Reduction
Lin Xiao, Tong Zhang
Tags: ODL
93 · 737 · 0 · 19 Mar 2014