ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

Optimal algorithms for smooth and strongly convex distributed optimization in networks

28 February 2017
Kevin Scaman, Francis R. Bach, Sébastien Bubeck, Y. Lee, Laurent Massoulié

Papers citing "Optimal algorithms for smooth and strongly convex distributed optimization in networks"

8 / 58 papers shown
Accelerated Decentralized Optimization with Local Updates for Smooth and Strongly Convex Objectives
Hadrien Hendrikx, Francis R. Bach, Laurent Massoulié
05 Oct 2018

COLA: Decentralized Linear Learning
Lie He, An Bian, Martin Jaggi
13 Aug 2018

Decentralize and Randomize: Faster Algorithm for Wasserstein Barycenters
Pavel Dvurechensky, D. Dvinskikh, Alexander Gasnikov, César A. Uribe, Angelia Nedić
11 Jun 2018

Towards More Efficient Stochastic Decentralized Learning: Faster Convergence and Sparse Communication
Zebang Shen, Aryan Mokhtari, Tengfei Zhou, P. Zhao, Hui Qian
25 May 2018

A Push-Pull Gradient Method for Distributed Optimization in Networks
Shi Pu, Wei Shi, Jinming Xu, A. Nedić
20 Mar 2018

Collaborative Deep Learning in Fixed Topology Networks (FedML)
Zhanhong Jiang, Aditya Balu, C. Hegde, S. Sarkar
23 Jun 2017

Improved Convergence Rates for Distributed Resource Allocation
A. Nedić, Alexander Olshevsky, Wei Shi
16 Jun 2017

A decentralized proximal-gradient method with network independent step-sizes and separated convergence rates
Zhi Li, W. Shi, Ming Yan
25 Apr 2017