Minimizing Finite Sums with the Stochastic Average Gradient
Mark W. Schmidt, Nicolas Le Roux, Francis R. Bach
arXiv:1309.2388 · 10 September 2013
Papers citing "Minimizing Finite Sums with the Stochastic Average Gradient" (50 of 503 papers shown)
Random-reshuffled SARAH does not need a full gradient computations
Aleksandr Beznosikov, Martin Takáč · 26 Nov 2021

Distributed Policy Gradient with Variance Reduction in Multi-Agent Reinforcement Learning
Xiaoxiao Zhao, Jinlong Lei, Li Li, Jie-bin Chen · 25 Nov 2021 · OffRL

Variance Reduction in Deep Learning: More Momentum is All You Need
Lionel Tondji, S. Kashubin, Moustapha Cissé · 23 Nov 2021 · ODL

Linear Speedup in Personalized Collaborative Learning
El Mahdi Chayti, Sai Praneeth Karimireddy, Sebastian U. Stich, Nicolas Flammarion, Martin Jaggi · 10 Nov 2021 · FedML

Nearly Optimal Linear Convergence of Stochastic Primal-Dual Methods for Linear Programming
Haihao Lu, Jinwen Yang · 10 Nov 2021

The Internet of Federated Things (IoFT): A Vision for the Future and In-depth Survey of Data-driven Approaches for Federated Learning
Raed Al Kontar, Naichen Shi, Xubo Yue, Seokhyun Chung, E. Byon, ..., C. Okwudire, Garvesh Raskutti, R. Saigal, Karandeep Singh, Ye Zhisheng · 09 Nov 2021 · FedML

Fast Line Search for Multi-Task Learning
A. Filatov, D. Merkulov · 02 Oct 2021

Accelerating Perturbed Stochastic Iterates in Asynchronous Lock-Free Optimization
Kaiwen Zhou, Anthony Man-Cho So, James Cheng · 30 Sep 2021

Pushing on Text Readability Assessment: A Transformer Meets Handcrafted Linguistic Features
Bruce W. Lee, Yoonna Jang, J. Lee · 25 Sep 2021 · VLM

Doubly Adaptive Scaled Algorithm for Machine Learning Using Second-Order Information
Majid Jahani, S. Rusakov, Zheng Shi, Peter Richtárik, Michael W. Mahoney, Martin Takáč · 11 Sep 2021 · ODL

Asynchronous Iterations in Optimization: New Sequence Results and Sharper Algorithmic Guarantees
Hamid Reza Feyzmahdavian, M. Johansson · 09 Sep 2021

COCO Denoiser: Using Co-Coercivity for Variance Reduction in Stochastic Convex Optimization
Manuel Madeira, Renato M. P. Negrinho, J. Xavier, P. Aguiar · 07 Sep 2021

Anarchic Federated Learning
Haibo Yang, Xin Zhang, Prashant Khanduri, Jia Liu · 23 Aug 2021 · FedML

Decentralized Composite Optimization with Compression
Yao Li, Xiaorui Liu, Jiliang Tang, Ming Yan, Kun Yuan · 10 Aug 2021

Physics-informed Dyna-Style Model-Based Deep Reinforcement Learning for Dynamic Control
Xin-Yang Liu, Jian-Xun Wang · 31 Jul 2021 · AI4CE

Coordinate-wise Control Variates for Deep Policy Gradients
Yuanyi Zhong, Yuanshuo Zhou, Jian-wei Peng · 11 Jul 2021 · BDL

Stochastic Gradient Descent-Ascent and Consensus Optimization for Smooth Games: Convergence Analysis under Expected Co-coercivity
Nicolas Loizou, Hugo Berard, Gauthier Gidel, Ioannis Mitliagkas, Simon Lacoste-Julien · 30 Jun 2021

The Convergence Rate of SGD's Final Iterate: Analysis on Dimension Dependence
Daogao Liu, Zhou Lu · 28 Jun 2021 · LRM

Stochastic Polyak Stepsize with a Moving Target
Robert Mansel Gower, Aaron Defazio, Michael G. Rabbat · 22 Jun 2021

Adaptive Learning Rate and Momentum for Training Deep Neural Networks
Zhiyong Hao, Yixuan Jiang, Huihua Yu, H. Chiang · 22 Jun 2021 · ODL

Kernel Clustering with Sigmoid-based Regularization for Efficient Segmentation of Sequential Data
Tung Doan, Atsuhiro Takasu · 22 Jun 2021

Extending the Abstraction of Personality Types based on MBTI with Machine Learning and Natural Language Processing
Carlos Basto · 25 May 2021

Improved Analysis and Rates for Variance Reduction under Without-replacement Sampling Orders
Xinmeng Huang, Kun Yuan, Xianghui Mao, W. Yin · 25 Apr 2021

Generalization of GANs and overparameterized models under Lipschitz continuity
Khoat Than, Nghia D. Vu · 06 Apr 2021 · AI4CE

Stochastic Reweighted Gradient Descent
Ayoub El Hanchi, D. Stephens · 23 Mar 2021

ANITA: An Optimal Loopless Accelerated Variance-Reduced Gradient Method
Zhize Li · 21 Mar 2021

Piecewise linear regression and classification
Alberto Bemporad · 10 Mar 2021

A Retrospective Approximation Approach for Smooth Stochastic Optimization
David Newton, Raghu Bollapragada, R. Pasupathy, N. Yip · 07 Mar 2021

Learning with Smooth Hinge Losses
Junru Luo, Hong Qiao, Bo-Wen Zhang · 27 Feb 2021

AI-SARAH: Adaptive and Implicit Stochastic Recursive Gradient Methods
Zheng Shi, Abdurakhmon Sadiev, Nicolas Loizou, Peter Richtárik, Martin Takáč · 19 Feb 2021 · ODL

SVRG Meets AdaGrad: Painless Variance Reduction
Benjamin Dubois-Taine, Sharan Vaswani, Reza Babanezhad, Mark W. Schmidt, Simon Lacoste-Julien · 18 Feb 2021

On the Convergence and Sample Efficiency of Variance-Reduced Policy Gradient Method
Junyu Zhang, Chengzhuo Ni, Zheng Yu, Csaba Szepesvári, Mengdi Wang · 17 Feb 2021

Distributed Second Order Methods with Fast Rates and Compressed Communication
Rustem Islamov, Xun Qian, Peter Richtárik · 14 Feb 2021

Federated Learning on Non-IID Data Silos: An Experimental Study
Yue Liu, Yiqun Diao, Quan Chen, Bingsheng He · 03 Feb 2021 · FedML, OOD

Variational Neural Annealing
Mohamed Hibat-Allah, E. Inack, R. Wiersema, R. Melko, Juan Carrasquilla · 25 Jan 2021 · DRL

Minibatch optimal transport distances; analysis and applications
Kilian Fatras, Younes Zine, Szymon Majewski, Rémi Flamary, Rémi Gribonval, Nicolas Courty · 05 Jan 2021 · OT

Delayed Projection Techniques for Linearly Constrained Problems: Convergence Rates, Acceleration, and Applications
Xiang Li, Zhihua Zhang · 05 Jan 2021

Learning Sign-Constrained Support Vector Machines
Kenya Tajima, Takahiko Henmi, Kohei Tsuchida, E. R. R. Zara, Tsuyoshi Kato · 05 Jan 2021

On Stochastic Variance Reduced Gradient Method for Semidefinite Optimization
Jinshan Zeng, Yixuan Zha, Ke Ma, Yuan Yao · 01 Jan 2021

PMGT-VR: A decentralized proximal-gradient algorithmic framework with variance reduction
Haishan Ye, Wei Xiong, Tong Zhang · 30 Dec 2020

Fast Incremental Expectation Maximization for finite-sum optimization: nonasymptotic convergence
G. Fort, Pierre Gach, Eric Moulines · 29 Dec 2020

Stochastic Gradient Variance Reduction by Solving a Filtering Problem
Xingyi Yang · 22 Dec 2020

Are we Forgetting about Compositional Optimisers in Bayesian Optimisation?
Antoine Grosnit, Alexander I. Cowen-Rivers, Rasul Tutunov, Ryan-Rhys Griffiths, Jun Wang, Haitham Bou-Ammar · 15 Dec 2020

Recent Theoretical Advances in Non-Convex Optimization
Marina Danilova, Pavel Dvurechensky, Alexander Gasnikov, Eduard A. Gorbunov, Sergey Guminov, Dmitry Kamzolov, Innokentiy Shibaev · 11 Dec 2020

Optimising cost vs accuracy of decentralised analytics in fog computing environments
Lorenzo Valerio, A. Passarella, M. Conti · 09 Dec 2020

SGD_Tucker: A Novel Stochastic Optimization Strategy for Parallel Sparse Tucker Decomposition
Hao Li, Zixuan Li, KenLi Li, Jan S. Rellermeyer, L. Chen, Keqin Li · 07 Dec 2020

Characterization of Excess Risk for Locally Strongly Convex Population Risk
Mingyang Yi, Ruoyu Wang, Zhi-Ming Ma · 04 Dec 2020

Convergence of Gradient Algorithms for Nonconvex C^{1+α} Cost Functions
Zixuan Wang, Shanjian Tang · 01 Dec 2020

Relative Lipschitzness in Extragradient Methods and a Direct Recipe for Acceleration
Michael B. Cohen, Aaron Sidford, Kevin Tian · 12 Nov 2020

Self-Tuning Stochastic Optimization with Curvature-Aware Gradient Filtering
Ricky T. Q. Chen, Dami Choi, Lukas Balles, David Duvenaud, Philipp Hennig · 09 Nov 2020 · ODL