Trading-off Accuracy and Communication Cost in Federated Learning

18 March 2025
Mattia Jacopo Villani
Emanuele Natale
Frederik Mallmann-Trenn
Abstract

Leveraging the training-by-pruning paradigm introduced by Zhou et al., Isik et al. introduced a federated learning protocol that achieves a 34-fold reduction in communication cost. We achieve compression improvements of orders of magnitude over the state-of-the-art. The central idea of our framework is to encode the network weights $\vec w$ by a vector of trainable parameters $\vec p$, such that $\vec w = Q \cdot \vec p$, where $Q$ is a carefully generated sparse random matrix (which remains fixed throughout training). In this framework, the previous work of Zhou et al. [NeurIPS'19] is recovered when $Q$ is diagonal and $\vec p$ has the same dimension as $\vec w$. We instead show that $\vec p$ can effectively be chosen much smaller than $\vec w$, retaining the same accuracy at the price of a decrease in the sparsity of $Q$. Since server and clients only need to share $\vec p$, this trade-off leads to a substantial improvement in communication cost. Moreover, we provide theoretical insight into our framework and establish a novel link between training-by-sampling and random convex geometry.

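For intuition, here is a minimal sketch of the encoding step described in the abstract. The dimensions, sparsity level, Gaussian entries of $Q$, and variable names are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch of the weight-encoding idea: the full weight vector w is never
# communicated; instead w = Q @ p, where Q is a fixed sparse random matrix
# that server and clients can rebuild from a shared seed, and only the much
# smaller trainable vector p is exchanged. All sizes below are assumptions.
import numpy as np
from scipy import sparse

n_weights = 100_000   # dimension of the network weights w (assumed)
n_params = 2_000      # dimension of the trainable vector p (assumed)
density = 0.01        # fraction of non-zero entries in Q (assumed)

rng = np.random.default_rng(seed=0)  # shared seed -> both sides rebuild the same Q

# Fixed sparse random encoding matrix Q (never updated during training).
Q = sparse.random(n_weights, n_params, density=density,
                  random_state=rng, data_rvs=rng.standard_normal).tocsr()

# Trainable parameters p; only this vector is sent between server and clients.
p = rng.standard_normal(n_params)

# Decode the full weight vector locally: w = Q @ p.
w = Q @ p

print(f"communicated floats per round: {p.size} instead of {w.size} "
      f"({w.size / p.size:.0f}x fewer)")
```

In this sketch the communication saving comes purely from the ratio between the dimensions of $\vec p$ and $\vec w$; choosing a smaller $\vec p$ requires a denser $Q$, which is the accuracy/communication trade-off the paper studies.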
@article{villani2025_2503.14246,
  title={Trading-off Accuracy and Communication Cost in Federated Learning},
  author={Mattia Jacopo Villani and Emanuele Natale and Frederik Mallmann-Trenn},
  journal={arXiv preprint arXiv:2503.14246},
  year={2025}
}