Distributed Non-Convex First-Order Optimization and Information Processing: Lower Complexity Bounds and Rate Optimal Algorithms

8 April 2018
Haoran Sun
Mingyi Hong
arXiv:1804.02729
Abstract

We consider a class of popular distributed non-convex optimization problems, in which agents connected by a network $\mathcal{G}$ collectively optimize a sum of smooth (possibly non-convex) local objective functions. We address the following question: if the agents can only access the gradients of their local functions, what are the fastest rates that any distributed algorithm can achieve, and how can those rates be achieved? First, we show that there exist difficult problem instances on which any method in a class of distributed first-order algorithms requires at least $\mathcal{O}\!\left(1/\sqrt{\xi(\mathcal{G})} \times \bar{L}/\epsilon\right)$ communication rounds to reach a certain $\epsilon$-solution, where $\xi(\mathcal{G})$ denotes the spectral gap of the graph Laplacian matrix and $\bar{L}$ is a Lipschitz constant. Second, we propose (near) optimal methods whose rates match the developed lower bound (up to a polylog factor). The key in the algorithm design is to properly embed classical polynomial filtering techniques into modern first-order algorithms. To the best of our knowledge, this is the first time that lower rate bounds and optimal methods have been developed for distributed non-convex optimization problems.
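
To give a concrete sense of what "polynomial filtering" means in this context, the sketch below shows one standard instance of the idea: Chebyshev-filtered consensus mixing, where a single multiplication by a gossip/mixing matrix W is replaced by a degree-K Chebyshev polynomial of W. Disagreement between agents is then damped at a rate governed by the square root of the spectral gap, which is the same $1/\sqrt{\xi(\mathcal{G})}$ dependence appearing in the lower bound above. This is an illustrative assumption, not the paper's actual algorithm; the function name, signature, and NumPy setup are hypothetical.

```python
import numpy as np

def chebyshev_mix(x, W, K, lam2):
    """Apply a degree-K Chebyshev polynomial filter of the mixing matrix W
    to the stacked local variables x (shape: n_agents x dim).

    lam2 is (an upper bound on) the second-largest eigenvalue modulus of W,
    i.e. one minus the spectral gap.  The consensus component of x is kept
    exactly, while disagreement components are scaled by T_K(s)/T_K(1/lam2)
    with |s| <= 1, which shrinks roughly like exp(-K * sqrt(2*(1 - lam2))).
    Requires K >= 1 (each degree costs one communication round).
    """
    # Chebyshev recursion T_{k+1}(s) = 2 s T_k(s) - T_{k-1}(s),
    # evaluated at the matrix W / lam2 (acting on x) and at the scalar 1 / lam2.
    y_prev, y_curr = x, (W @ x) / lam2          # T_0(W/lam2) x, T_1(W/lam2) x
    t_prev, t_curr = 1.0, 1.0 / lam2            # T_0(1/lam2),   T_1(1/lam2)
    for _ in range(1, K):
        y_prev, y_curr = y_curr, 2.0 * (W @ y_curr) / lam2 - y_prev
        t_prev, t_curr = t_curr, 2.0 * t_curr / lam2 - t_prev
    return y_curr / t_curr                      # normalize so consensus is unchanged
```

In a hypothetical distributed gradient scheme, each outer iteration would call this routine in place of one round of plain averaging (at the cost of K communication rounds), then take a local gradient step on the filtered variables; embedding such a filter inside a modern first-order method is the general flavor of the (near) optimal algorithms the abstract describes.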
