A Simple Practical Accelerated Method for Finite Sums

Aaron Defazio
8 February 2016
arXiv: 1602.02442 (PDF / HTML)

Papers citing "A Simple Practical Accelerated Method for Finite Sums" (24 of 24 papers shown)

A Coefficient Makes SVRG Effective
Yida Yin, Zhiqiu Xu, Zhiyuan Li, Trevor Darrell, Zhuang Liu
09 Nov 2023

Stochastic Distributed Optimization under Average Second-order Similarity: Algorithms and Analysis
Dachao Lin, Yuze Han, Haishan Ye, Zhihua Zhang
15 Apr 2023

On the fast convergence of minibatch heavy ball momentum
Raghu Bollapragada, Tyler Chen, Rachel A. Ward
15 Jun 2022

An Adaptive Incremental Gradient Method With Support for Non-Euclidean Norms
Binghui Xie, Chen Jin, Kaiwen Zhou, James Cheng, Wei Meng
28 Apr 2022

Lower Bounds and Optimal Algorithms for Personalized Federated Learning [FedML]
Filip Hanzely, Slavomír Hanzely, Samuel Horváth, Peter Richtárik
05 Oct 2020

Optimization for Supervised Machine Learning: Randomized Algorithms for Data and Parameters
Filip Hanzely
26 Aug 2020

Stochastic Hamiltonian Gradient Methods for Smooth Games
Nicolas Loizou, Hugo Berard, Alexia Jolicoeur-Martineau, Pascal Vincent, Simon Lacoste-Julien, Ioannis Mitliagkas
08 Jul 2020

Variance Reduction via Accelerated Dual Averaging for Finite-Sum Optimization
Chaobing Song, Yong Jiang, Yi Ma
18 Jun 2020

Gradient tracking and variance reduction for decentralized optimization and machine learning
Ran Xin, S. Kar, U. Khan
13 Feb 2020

Variance Reduced Coordinate Descent with Acceleration: New Method With a Surprising Application to Finite-Sum Problems
Filip Hanzely, D. Kovalev, Peter Richtárik
11 Feb 2020

The Practicality of Stochastic Optimization in Imaging Inverse Problems
Junqi Tang, K. Egiazarian, Mohammad Golbabaee, Mike Davies
22 Oct 2019

Asynchronous Accelerated Proximal Stochastic Gradient for Strongly Convex Distributed Finite Sums [FedML]
Hadrien Hendrikx, Francis R. Bach, Laurent Massoulié
28 Jan 2019

99% of Distributed Optimization is a Waste of Time: The Issue and How to Fix it
Konstantin Mishchenko, Filip Hanzely, Peter Richtárik
27 Jan 2019

On the Ineffectiveness of Variance Reduced Optimization for Deep Learning [UQCV, DRL]
Aaron Defazio, Léon Bottou
11 Dec 2018

A Simple Stochastic Variance Reduced Algorithm with Fast Convergence Rates
Kaiwen Zhou, Fanhua Shang, James Cheng
28 Jun 2018

Towards More Efficient Stochastic Decentralized Learning: Faster Convergence and Sparse Communication
Zebang Shen, Aryan Mokhtari, Tengfei Zhou, P. Zhao, Hui Qian
25 May 2018

On the insufficiency of existing momentum schemes for Stochastic Optimization [ODL]
Rahul Kidambi, Praneeth Netrapalli, Prateek Jain, Sham Kakade
15 Mar 2018

Momentum and Stochastic Momentum for Stochastic Gradient, Newton, Proximal Point and Subspace Descent Methods
Nicolas Loizou, Peter Richtárik
27 Dec 2017

Stochastic Nonconvex Optimization with Large Minibatches
Weiran Wang, Nathan Srebro
25 Sep 2017

A Unified Analysis of Stochastic Optimization Methods Using Jump System Theory and Quadratic Constraints
Bin Hu, Peter M. Seiler, Anders Rantzer
25 Jun 2017

Memory and Communication Efficient Distributed Stochastic Optimization with Minibatch-Prox
Jialei Wang, Weiran Wang, Nathan Srebro
21 Feb 2017

Federated Optimization: Distributed Machine Learning for On-Device Intelligence [FedML]
Jakub Konecný, H. B. McMahan, Daniel Ramage, Peter Richtárik
08 Oct 2016

AIDE: Fast and Communication Efficient Distributed Optimization
Sashank J. Reddi, Jakub Konecný, Peter Richtárik, Barnabás Póczós, Alex Smola
24 Aug 2016

Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning
Julien Mairal
18 Feb 2014