ResearchTrend.AI
Minimizing Finite Sums with the Stochastic Average Gradient
arXiv:1309.2388 · 10 September 2013
Mark Schmidt, Nicolas Le Roux, Francis R. Bach

Papers citing "Minimizing Finite Sums with the Stochastic Average Gradient"

50 / 506 papers shown
  • Duality-free Methods for Stochastic Composition Optimization (Liu Liu, Ji Liu, Dacheng Tao; 26 Oct 2017)
  • Optimal Rates for Learning with Nyström Stochastic Gradient Methods (Junhong Lin, Lorenzo Rosasco; 21 Oct 2017)
  • A Novel Stochastic Stratified Average Gradient Method: Convergence Rate and Its Complexity (Aixiang Chen, Bingchuan Chen, Xiaolong Chai, Rui-Ling Bian, Hengguang Li; 21 Oct 2017)
  • Tracking the gradients using the Hessian: A new look at variance reducing stochastic methods (Robert Mansel Gower, Nicolas Le Roux, Francis R. Bach; 20 Oct 2017)
  • Smooth and Sparse Optimal Transport (Mathieu Blondel, Vivien Seguy, Antoine Rolet; 17 Oct 2017) [OT]
  • Nesterov's Acceleration For Approximate Newton (Haishan Ye, Zhihua Zhang; 17 Oct 2017) [ODL]
  • Sign-Constrained Regularized Loss Minimization (Tsuyoshi Kato, Misato Kobayashi, Daisuke Sano; 12 Oct 2017)
  • Fast and Safe: Accelerated gradient methods with optimality certificates and underestimate sequences (Majid Jahani, N. V. C. Gudapati, Chenxin Ma, R. Tappenden, Martin Takáč; 10 Oct 2017)
  • SGD for robot motion? The effectiveness of stochastic optimization on a new benchmark for biped locomotion tasks (Martim Brandao, K. Hashimoto, A. Takanishi; 09 Oct 2017)
  • A Generic Approach for Escaping Saddle points (Sashank J. Reddi, Manzil Zaheer, S. Sra, Barnabás Póczós, Francis R. Bach, Ruslan Salakhutdinov, Alex Smola; 05 Sep 2017)
  • A Convergence Analysis for A Class of Practical Variance-Reduction Stochastic Gradient MCMC (Changyou Chen, Wenlin Wang, Yizhe Zhang, Qinliang Su, Lawrence Carin; 04 Sep 2017)
  • Natasha 2: Faster Non-Convex Optimization Than SGD (Zeyuan Allen-Zhu; 29 Aug 2017) [ODL]
  • Newton-Type Methods for Non-Convex Optimization Under Inexact Hessian Information (Peng Xu, Farbod Roosta-Khorasani, Michael W. Mahoney; 23 Aug 2017)
  • Variance-Reduced Stochastic Learning by Networked Agents under Random Reshuffling (Kun Yuan, Bicheng Ying, Jiageng Liu, Ali H. Sayed; 04 Aug 2017)
  • A Robust Multi-Batch L-BFGS Method for Machine Learning (A. Berahas, Martin Takáč; 26 Jul 2017) [AAML, ODL]
  • Breaking the Nonsmooth Barrier: A Scalable Parallel Method for Composite Optimization (Fabian Pedregosa, Rémi Leblond, Simon Lacoste-Julien; 20 Jul 2017)
  • Stochastic Variance Reduction Gradient for a Non-convex Problem Using Graduated Optimization (Li Chen, Shuisheng Zhou, Zhuan Zhang; 10 Jul 2017)
  • Stochastic, Distributed and Federated Optimization for Machine Learning (Jakub Konecný; 04 Jul 2017) [FedML]
  • Generalization Properties of Doubly Stochastic Learning Algorithms (Junhong Lin, Lorenzo Rosasco; 03 Jul 2017)
  • Optimization Methods for Supervised Machine Learning: From Linear Models to Deep Learning (Frank E. Curtis, K. Scheinberg; 30 Jun 2017)
  • A Unified Analysis of Stochastic Optimization Methods Using Jump System Theory and Quadratic Constraints (Bin Hu, Peter M. Seiler, Anders Rantzer; 25 Jun 2017)
  • Stochastic Primal-Dual Hybrid Gradient Algorithm with Arbitrary Sampling and Imaging Applications (A. Chambolle, Matthias Joachim Ehrhardt, Peter Richtárik, Carola-Bibiane Schönlieb; 15 Jun 2017)
  • Limitations on Variance-Reduction and Acceleration Schemes for Finite Sum Optimization (Yossi Arjevani; 06 Jun 2017)
  • Stochastic Reformulations of Linear Systems: Algorithms and Convergence Theory (Peter Richtárik, Martin Takáč; 04 Jun 2017)
  • Near-linear time approximation algorithms for optimal transport via Sinkhorn iteration (Jason M. Altschuler, Jonathan Niles-Weed, Philippe Rigollet; 26 May 2017) [OT]
  • Stochastic Recursive Gradient Algorithm for Nonconvex Optimization (Lam M. Nguyen, Jie Liu, K. Scheinberg, Martin Takáč; 20 May 2017)
  • Nestrov's Acceleration For Second Order Method (Haishan Ye, Zhihua Zhang; 19 May 2017) [ODL]
  • An Investigation of Newton-Sketch and Subsampled Newton Methods (A. Berahas, Raghu Bollapragada, J. Nocedal; 17 May 2017)
  • Determinantal Point Processes for Mini-Batch Diversification (Cheng Zhang, Hedvig Kjellström, Stephan Mandt; 01 May 2017)
  • Limits of End-to-End Learning (Tobias Glasmachers; 26 Apr 2017)
  • Batch-Expansion Training: An Efficient Optimization Framework (Michal Derezinski, D. Mahajan, S. Keerthi, S.V.N. Vishwanathan, Markus Weimer; 22 Apr 2017)
  • Importance Sampled Stochastic Optimization for Variational Inference (J. Sakaya, Arto Klami; 19 Apr 2017) [BDL]
  • Larger is Better: The Effect of Learning Rates Enjoyed by Stochastic Optimization with Progressive Variance Reduction (Fanhua Shang; 17 Apr 2017)
  • Deep Relaxation: partial differential equations for optimizing deep neural networks (Pratik Chaudhari, Adam M. Oberman, Stanley Osher, Stefano Soatto, G. Carlier; 17 Apr 2017)
  • Stochastic L-BFGS: Improved Convergence Rates and Practical Acceleration Strategies (Renbo Zhao, W. Haskell, Vincent Y. F. Tan; 01 Apr 2017)
  • Catalyst Acceleration for Gradient-Based Non-Convex Optimization (Courtney Paquette, Hongzhou Lin, Dmitriy Drusvyatskiy, Julien Mairal, Zaïd Harchaoui; 31 Mar 2017) [ODL]
  • Fast Stochastic Variance Reduced Gradient Method with Momentum Acceleration for Machine Learning (Fanhua Shang, Yuanyuan Liu, James Cheng, Jiacheng Zhuo; 23 Mar 2017) [ODL]
  • Guaranteed Sufficient Decrease for Variance Reduced Stochastic Gradient Descent (Fanhua Shang, Yuanyuan Liu, James Cheng, K. K. Ng, Yuichi Yoshida; 20 Mar 2017)
  • Doubly Accelerated Stochastic Variance Reduced Dual Averaging Method for Regularized Empirical Risk Minimization (Tomoya Murata, Taiji Suzuki; 01 Mar 2017) [OffRL]
  • SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient (Lam M. Nguyen, Jie Liu, K. Scheinberg, Martin Takáč; 01 Mar 2017) [ODL]
  • SAGA and Restricted Strong Convexity (Chao Qu, Yan Li, Huan Xu; 19 Feb 2017)
  • Natasha: Faster Non-Convex Stochastic Optimization Via Strongly Non-Convex Parameter (Zeyuan Allen-Zhu; 02 Feb 2017)
  • IQN: An Incremental Quasi-Newton Method with Local Superlinear Convergence Rate (Aryan Mokhtari, Mark Eisen, Alejandro Ribeiro; 02 Feb 2017)
  • Linear convergence of SDCA in statistical estimation (Chao Qu, Huan Xu; 26 Jan 2017)
  • A Universal Variance Reduction-Based Catalyst for Nonconvex Low-Rank Matrix Recovery (Lingxiao Wang, Xiao Zhang, Quanquan Gu; 09 Jan 2017)
  • Stochastic Variance-reduced Gradient Descent for Low-rank Matrix Recovery from Linear Measurements (Xiao Zhang, Lingxiao Wang, Quanquan Gu; 02 Jan 2017)
  • Parsimonious Online Learning with Kernels via Sparse Projections in Function Space (Alec Koppel, Garrett A. Warnell, Ethan Stump, Alejandro Ribeiro; 13 Dec 2016)
  • Subsampled online matrix factorization with convergence guarantees (A. Mensch, Julien Mairal, Gaël Varoquaux, Bertrand Thirion; 30 Nov 2016)
  • Accelerated Variance Reduced Block Coordinate Descent (Zebang Shen, Hui Qian, Chao Zhang, Tengfei Zhou; 13 Nov 2016)
  • Linear Convergence of SVRG in Statistical Estimation (Chao Qu, Yan Li, Huan Xu; 07 Nov 2016)