Fundamental Limits of Distributed Optimization over Multiple Access Channel

5 October 2023
Shubham K. Jha
Abstract

We consider distributed optimization over a $d$-dimensional space, where $K$ remote clients send coded gradient estimates over an additive Gaussian Multiple Access Channel (MAC) with noise variance $\sigma_z^2$. Furthermore, the codewords from the clients must satisfy the average power constraint $P$, resulting in a signal-to-noise ratio (SNR) of $KP/\sigma_z^2$. In this paper, we study the fundamental limits imposed by the MAC on the convergence rate of any distributed optimization algorithm and design optimal communication schemes to achieve these limits. Our first result is a lower bound for the convergence rate, showing that communicating over a MAC imposes a slowdown of $\sqrt{d/\tfrac{1}{2}\log(1+\mathrm{SNR})}$ on any protocol compared to the centralized setting. Next, we design a computationally tractable digital communication scheme that matches the lower bound up to a logarithmic factor in $K$ when combined with a projected stochastic gradient descent algorithm. At the heart of our communication scheme is a careful combination of several compression and modulation ideas, such as quantizing along random bases, Wyner-Ziv compression, modulo-lattice decoding, and amplitude shift keying. We also show that analog schemes, which are popular due to their ease of implementation, can give close to optimal convergence rates at low $\mathrm{SNR}$ but experience a slowdown of roughly $\sqrt{d}$ at high $\mathrm{SNR}$.
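As a rough illustration of the scaling stated in the abstract, the sketch below (plain Python, not code from the paper) evaluates the lower-bound slowdown $\sqrt{d/\tfrac{1}{2}\log(1+\mathrm{SNR})}$ and the roughly $\sqrt{d}$ high-SNR slowdown of analog schemes; the dimension $d$, client count $K$, power $P$, and noise variance used here are illustrative placeholders.

```python
import math

def snr(K, P, sigma_z2):
    """Effective SNR of the Gaussian MAC: K*P / sigma_z^2, as defined in the abstract."""
    return K * P / sigma_z2

def digital_slowdown(d, snr_val):
    """Lower-bound slowdown factor sqrt(d / (0.5 * log(1 + SNR))) vs. the centralized setting."""
    return math.sqrt(d / (0.5 * math.log(1.0 + snr_val)))

def analog_slowdown_high_snr(d):
    """Analog schemes incur a slowdown of roughly sqrt(d) at high SNR."""
    return math.sqrt(d)

# Illustrative values (not from the paper): d = 10^4 dimensions, K = 100 clients, unit noise variance.
d, K, sigma_z2 = 10_000, 100, 1.0
for P in (0.01, 1.0, 100.0):
    s = snr(K, P, sigma_z2)
    print(f"P={P:>6}: SNR={s:>8.1f}  digital slowdown ~ {digital_slowdown(d, s):8.1f}  "
          f"analog (high SNR) ~ {analog_slowdown_high_snr(d):8.1f}")
```

Under these placeholder numbers, the digital-scheme slowdown shrinks as the SNR grows (since $\tfrac{1}{2}\log(1+\mathrm{SNR})$ increases), while the analog slowdown stays pinned near $\sqrt{d}$, which is the gap the abstract points out at high SNR.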
