ASAGA: Asynchronous Parallel SAGA
Rémi Leblond, Fabian Pedregosa, Simon Lacoste-Julien
arXiv:1606.04809, 15 June 2016
Communities: AI4TS
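For context, the sketch below shows the serial SAGA update that ASAGA runs asynchronously and lock-free across cores. It is only an illustration: the least-squares objective, the function name, and the fixed step size are assumptions made here for the example, not details taken from the paper or from this page.

```python
import numpy as np

def saga_least_squares(A, b, step, epochs, seed=0):
    """Serial SAGA sketch for (1/n) * sum_i 0.5 * (A[i] @ x - b[i])**2.

    Illustrative only: ASAGA executes this same update concurrently on
    shared parameters without locks; this sketch is single-threaded.
    """
    n, d = A.shape
    rng = np.random.default_rng(seed)
    x = np.zeros(d)
    # Memory of the last gradient seen for each sample, plus its running mean.
    grad_table = np.zeros((n, d))
    grad_mean = np.zeros(d)
    for _ in range(epochs * n):
        i = rng.integers(n)
        new_grad = (A[i] @ x - b[i]) * A[i]  # fresh gradient of f_i at x
        # SAGA step: unbiased, variance-reduced gradient estimate.
        x -= step * (new_grad - grad_table[i] + grad_mean)
        # Update the stored average first (it still refers to the old entry),
        # then overwrite the memory for sample i.
        grad_mean += (new_grad - grad_table[i]) / n
        grad_table[i] = new_grad
    return x

# Hypothetical usage on synthetic data:
# A = np.random.randn(200, 10); b = A @ np.ones(10) + 0.1 * np.random.randn(200)
# x_hat = saga_least_squares(A, b, step=0.05, epochs=30)
```

In the asynchronous setting studied in the paper, each worker repeats this loop on shared memory and may read partially updated iterates; the paper analyzes the resulting perturbed iterates and gives conditions under which convergence and parallel speedup are preserved.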

Papers citing "ASAGA: Asynchronous Parallel SAGA" (19 papers)
 1. Distributed Stochastic Gradient Descent with Staleness: A Stochastic Delay Differential Equation Based Framework
    Siyuan Yu, Wei Chen, H. V. Poor (17 Jun 2024)
 2. Accelerating Perturbed Stochastic Iterates in Asynchronous Lock-Free Optimization
    Kaiwen Zhou, Anthony Man-Cho So, James Cheng (30 Sep 2021)
 3. Federated Learning with Buffered Asynchronous Aggregation
    John Nguyen, Kshitiz Malik, Hongyuan Zhan, Ashkan Yousefpour, Michael G. Rabbat, Mani Malek, Dzmitry Huba (11 Jun 2021) [FedML]
 4. Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices
    Max Ryabinin, Eduard A. Gorbunov, Vsevolod Plokhotnyuk, Gennady Pekhimenko (04 Mar 2021)
 5. Optimization for Supervised Machine Learning: Randomized Algorithms for Data and Parameters
    Filip Hanzely (26 Aug 2020)
 6. Privacy-Preserving Asynchronous Federated Learning Algorithms for Multi-Party Vertically Collaborative Learning
    Bin Gu, An Xu, Zhouyuan Huo, Cheng Deng, Heng-Chiao Huang (14 Aug 2020) [FedML]
 7. Variance Reduction via Accelerated Dual Averaging for Finite-Sum Optimization
    Chaobing Song, Yong Jiang, Yi Ma (18 Jun 2020)
 8. A Unifying Framework for Variance Reduction Algorithms for Finding Zeroes of Monotone Operators
    Xun Zhang, W. Haskell, Z. Ye (22 Jun 2019)
 9. Block stochastic gradient descent for large-scale tomographic reconstruction in a parallel network
    Yushan Gao, A. Biguri, T. Blumensath (28 Mar 2019)
10. Asynchronous Accelerated Proximal Stochastic Gradient for Strongly Convex Distributed Finite Sums
    Hadrien Hendrikx, Francis R. Bach, Laurent Massoulié (28 Jan 2019) [FedML]
11. POLO: a POLicy-based Optimization library
    Arda Aytekin, Martin Biel, M. Johansson (08 Oct 2018)
12. Anytime Stochastic Gradient Descent: A Time to Hear from all the Workers
    Nuwan S. Ferdinand, S. Draper (06 Oct 2018)
13. A Simple Stochastic Variance Reduced Algorithm with Fast Convergence Rates
    Kaiwen Zhou, Fanhua Shang, James Cheng (28 Jun 2018)
14. Double Quantization for Communication-Efficient Distributed Optimization
    Yue Yu, Jiaxiang Wu, Longbo Huang (25 May 2018) [MQ]
15. Parallel and Distributed Successive Convex Approximation Methods for Big-Data Optimization
    G. Scutari, Ying Sun (17 May 2018)
16. Slow and Stale Gradients Can Win the Race: Error-Runtime Trade-offs in Distributed SGD
    Sanghamitra Dutta, Gauri Joshi, Soumyadip Ghosh, Parijat Dube, P. Nagpurkar (03 Mar 2018)
17. Improved asynchronous parallel optimization analysis for stochastic incremental methods
    Rémi Leblond, Fabian Pedregosa, Simon Lacoste-Julien (11 Jan 2018)
18. Federated Optimization: Distributed Machine Learning for On-Device Intelligence
    Jakub Konecný, H. B. McMahan, Daniel Ramage, Peter Richtárik (08 Oct 2016) [FedML]
19. ARock: an Algorithmic Framework for Asynchronous Parallel Coordinate Updates
    Zhimin Peng, Yangyang Xu, Ming Yan, W. Yin (08 Jun 2015)