Minimizing Finite Sums with the Stochastic Average Gradient
arXiv: 1309.2388
10 September 2013
Mark W. Schmidt, Nicolas Le Roux, Francis R. Bach
Papers citing "Minimizing Finite Sums with the Stochastic Average Gradient" (50 of 503 papers shown)
Tracking the gradients using the Hessian: A new look at variance reducing stochastic methods. Robert Mansel Gower, Nicolas Le Roux, Francis R. Bach (20 Oct 2017)
Smooth and Sparse Optimal Transport. Mathieu Blondel, Vivien Seguy, Antoine Rolet (17 Oct 2017)
Nesterov's Acceleration For Approximate Newton. Haishan Ye, Zhihua Zhang (17 Oct 2017)
Sign-Constrained Regularized Loss Minimization. Tsuyoshi Kato, Misato Kobayashi, Daisuke Sano (12 Oct 2017)
Fast and Safe: Accelerated gradient methods with optimality certificates and underestimate sequences. Majid Jahani, N. V. C. Gudapati, Chenxin Ma, R. Tappenden, Martin Takáč (10 Oct 2017)
SGD for robot motion? The effectiveness of stochastic optimization on a new benchmark for biped locomotion tasks. Martim Brandao, K. Hashimoto, A. Takanishi (09 Oct 2017)
A Generic Approach for Escaping Saddle points. Sashank J. Reddi, Manzil Zaheer, S. Sra, Barnabás Póczós, Francis R. Bach, Ruslan Salakhutdinov, Alex Smola (05 Sep 2017)
A Convergence Analysis for A Class of Practical Variance-Reduction Stochastic Gradient MCMC. Changyou Chen, Wenlin Wang, Yizhe Zhang, Qinliang Su, Lawrence Carin (04 Sep 2017)
Natasha 2: Faster Non-Convex Optimization Than SGD. Zeyuan Allen-Zhu (29 Aug 2017)
Newton-Type Methods for Non-Convex Optimization Under Inexact Hessian Information. Peng Xu, Farbod Roosta-Khorasani, Michael W. Mahoney (23 Aug 2017)
Variance-Reduced Stochastic Learning by Networked Agents under Random Reshuffling. Kun Yuan, Bicheng Ying, Jiageng Liu, Ali H. Sayed (04 Aug 2017)
A Robust Multi-Batch L-BFGS Method for Machine Learning. A. Berahas, Martin Takáč (26 Jul 2017)
Breaking the Nonsmooth Barrier: A Scalable Parallel Method for Composite Optimization. Fabian Pedregosa, Rémi Leblond, Simon Lacoste-Julien (20 Jul 2017)
Stochastic Variance Reduction Gradient for a Non-convex Problem Using Graduated Optimization. Li Chen, Shuisheng Zhou, Zhuan Zhang (10 Jul 2017)
Stochastic, Distributed and Federated Optimization for Machine Learning. Jakub Konecný (04 Jul 2017)
Generalization Properties of Doubly Stochastic Learning Algorithms. Junhong Lin, Lorenzo Rosasco (03 Jul 2017)
Optimization Methods for Supervised Machine Learning: From Linear Models to Deep Learning. Frank E. Curtis, K. Scheinberg (30 Jun 2017)
A Unified Analysis of Stochastic Optimization Methods Using Jump System Theory and Quadratic Constraints. Bin Hu, Peter M. Seiler, Anders Rantzer (25 Jun 2017)
Stochastic Primal-Dual Hybrid Gradient Algorithm with Arbitrary Sampling and Imaging Applications. A. Chambolle, Matthias Joachim Ehrhardt, Peter Richtárik, Carola-Bibiane Schönlieb (15 Jun 2017)
Limitations on Variance-Reduction and Acceleration Schemes for Finite Sum Optimization. Yossi Arjevani (06 Jun 2017)
Stochastic Reformulations of Linear Systems: Algorithms and Convergence Theory. Peter Richtárik, Martin Takáč (04 Jun 2017)
Near-linear time approximation algorithms for optimal transport via Sinkhorn iteration. Jason M. Altschuler, Jonathan Niles-Weed, Philippe Rigollet (26 May 2017)
Stochastic Recursive Gradient Algorithm for Nonconvex Optimization. Lam M. Nguyen, Jie Liu, K. Scheinberg, Martin Takáč (20 May 2017)
Nestrov's Acceleration For Second Order Method. Haishan Ye, Zhihua Zhang (19 May 2017)
An Investigation of Newton-Sketch and Subsampled Newton Methods. A. Berahas, Raghu Bollapragada, J. Nocedal (17 May 2017)
Determinantal Point Processes for Mini-Batch Diversification. Cheng Zhang, Hedvig Kjellström, Stephan Mandt (01 May 2017)
Limits of End-to-End Learning. Tobias Glasmachers (26 Apr 2017)
Batch-Expansion Training: An Efficient Optimization Framework. Michal Derezinski, D. Mahajan, S. Keerthi, S.V.N. Vishwanathan, Markus Weimer (22 Apr 2017)
Importance Sampled Stochastic Optimization for Variational Inference. J. Sakaya, Arto Klami (19 Apr 2017)
Larger is Better: The Effect of Learning Rates Enjoyed by Stochastic Optimization with Progressive Variance Reduction. Fanhua Shang (17 Apr 2017)
Deep Relaxation: partial differential equations for optimizing deep neural networks. Pratik Chaudhari, Adam M. Oberman, Stanley Osher, Stefano Soatto, G. Carlier (17 Apr 2017)
Stochastic L-BFGS: Improved Convergence Rates and Practical Acceleration Strategies. Renbo Zhao, W. Haskell, Vincent Y. F. Tan (01 Apr 2017)
Catalyst Acceleration for Gradient-Based Non-Convex Optimization. Courtney Paquette, Hongzhou Lin, Dmitriy Drusvyatskiy, Julien Mairal, Zaïd Harchaoui (31 Mar 2017)
Fast Stochastic Variance Reduced Gradient Method with Momentum Acceleration for Machine Learning. Fanhua Shang, Yuanyuan Liu, James Cheng, Jiacheng Zhuo (23 Mar 2017)
Guaranteed Sufficient Decrease for Variance Reduced Stochastic Gradient Descent. Fanhua Shang, Yuanyuan Liu, James Cheng, K. K. Ng, Yuichi Yoshida (20 Mar 2017)
Doubly Accelerated Stochastic Variance Reduced Dual Averaging Method for Regularized Empirical Risk Minimization. Tomoya Murata, Taiji Suzuki (01 Mar 2017)
SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient. Lam M. Nguyen, Jie Liu, K. Scheinberg, Martin Takáč (01 Mar 2017)
SAGA and Restricted Strong Convexity. C. Qu, Yan Li, Huan Xu (19 Feb 2017)
Natasha: Faster Non-Convex Stochastic Optimization Via Strongly Non-Convex Parameter. Zeyuan Allen-Zhu (02 Feb 2017)
IQN: An Incremental Quasi-Newton Method with Local Superlinear Convergence Rate. Aryan Mokhtari, Mark Eisen, Alejandro Ribeiro (02 Feb 2017)
Linear convergence of SDCA in statistical estimation. C. Qu, Huan Xu (26 Jan 2017)
A Universal Variance Reduction-Based Catalyst for Nonconvex Low-Rank Matrix Recovery. Lingxiao Wang, Xiao Zhang, Quanquan Gu (09 Jan 2017)
Stochastic Variance-reduced Gradient Descent for Low-rank Matrix Recovery from Linear Measurements. Xiao Zhang, Lingxiao Wang, Quanquan Gu (02 Jan 2017)
Parsimonious Online Learning with Kernels via Sparse Projections in Function Space. Alec Koppel, Garrett A. Warnell, Ethan Stump, Alejandro Ribeiro (13 Dec 2016)
Subsampled online matrix factorization with convergence guarantees. A. Mensch, Julien Mairal, Gaël Varoquaux, B. Thirion (30 Nov 2016)
Accelerated Variance Reduced Block Coordinate Descent. Zebang Shen, Hui Qian, Chao Zhang, Tengfei Zhou (13 Nov 2016)
Linear Convergence of SVRG in Statistical Estimation. C. Qu, Yan Li, Huan Xu (07 Nov 2016)
Finding Approximate Local Minima Faster than Gradient Descent. Naman Agarwal, Zeyuan Allen-Zhu, Brian Bullins, Elad Hazan, Tengyu Ma (03 Nov 2016)
Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate. Aryan Mokhtari, Mert Gurbuzbalaban, Alejandro Ribeiro (01 Nov 2016)
Asynchronous Stochastic Block Coordinate Descent with Variance Reduction. Bin Gu, Zhouyuan Huo, Heng-Chiao Huang (29 Oct 2016)