Optimal Algorithms for Distributed Optimization

1 December 2017
César A. Uribe
Soomin Lee
Alexander Gasnikov
A. Nedić
arXiv: 1712.00232
Abstract

In this paper, we study the optimal convergence rate for distributed convex optimization problems in networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely: the function $F(x) \triangleq \sum_{i=1}^{m} f_i(x)$ is strongly convex and smooth, either strongly convex or smooth, or just convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and obtains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvement of the condition numbers.
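
To make the approach concrete, here is a minimal sketch (not code from the paper) of accelerated gradient descent on the dual of a consensus-constrained reformulation. Each of m agents holds a local quadratic f_i(x) = 0.5 ||x - b_i||^2, consensus is encoded as the affine constraint W x = 0 with W the Laplacian of the communication graph, and every dual gradient evaluation costs one closed-form local minimization per agent plus multiplications by W, i.e., rounds of neighbor-to-neighbor communication. The ring graph, the quadratic losses, the step size, and the iteration count are all illustrative assumptions.

```python
import numpy as np

# Illustrative setup (an assumption, not from the paper): m agents on a ring,
# agent i holds f_i(x) = 0.5 * ||x - b_i||^2, so the minimizer of
# F(x) = sum_i f_i(x) is the average of the b_i.
rng = np.random.default_rng(0)
m, n = 10, 3                         # number of agents, variable dimension
b = rng.normal(size=(m, n))          # local data b_i

# Ring-graph Laplacian: W @ x == 0 iff all rows of x agree (consensus),
# so the network is modeled by the affine constraint W x = 0.
W = 2.0 * np.eye(m)
for i in range(m):
    W[i, (i + 1) % m] = W[i, (i - 1) % m] = -1.0

def primal_from_dual(lam):
    """Local minimization, in closed form for these quadratics:
    x_i(lam) = argmin_x f_i(x) + <(W lam)_i, x> = b_i - (W lam)_i.
    The product W @ lam is one round of neighbor-to-neighbor communication."""
    return b - W @ lam

# Dual objective (to minimize): phi(lam) = -min_x { sum_i f_i(x_i) + <lam, W x> },
# with gradient grad phi(lam) = -W x(lam) and Lipschitz constant
# lambda_max(W)^2 / mu (mu = 1 for these f_i), which is where the spectral
# properties of the interaction matrix enter the rate.
L_dual = np.linalg.eigvalsh(W).max() ** 2
lam = np.zeros((m, n))               # dual iterate
y = lam.copy()                       # extrapolated (momentum) point
for k in range(400):
    grad = -W @ primal_from_dual(y)  # one more communication round
    lam_next = y - grad / L_dual     # dual gradient step
    y = lam_next + (k / (k + 3)) * (lam_next - lam)  # Nesterov momentum
    lam = lam_next

x = primal_from_dual(lam)
x_star = b.mean(axis=0)              # centralized minimizer, for reference
print("max distance of any agent to x*:", np.abs(x - x_star).max())
```

Each iteration involves only local solves and multiplications by W, so its cost splits into local computation plus communication, and the iteration count picks up a dependence on the inverse spectral gap of W on top of the centralized accelerated rate, consistent with the additional cost the abstract attributes to the interaction matrix.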
