arXiv:2403.00881
FedRDMA: Communication-Efficient Cross-Silo Federated LLM via Chunked RDMA Transmission

1 March 2024
Zeling Zhang
Dongqi Cai
Yiran Zhang
Mengwei Xu
Shangguang Wang
Ao Zhou
Abstract

Communication overhead is a significant bottleneck in federated learning (FL), and it has been exacerbated by the increasing size of AI models. In this paper, we propose FedRDMA, a communication-efficient cross-silo FL system that integrates RDMA into the FL communication protocol. To overcome the limitations of RDMA in wide-area networks (WANs), FedRDMA divides the updated model into chunks and designs a series of optimization techniques to improve the efficiency and robustness of RDMA-based communication. We implement FedRDMA atop an industrial federated learning framework and evaluate it on a real-world cross-silo FL scenario. The experimental results show that FedRDMA can achieve up to a 3.8× speedup in communication efficiency compared to traditional TCP/IP-based FL systems.
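The core idea in the abstract — dividing the updated model into chunks before transmission so that each piece can be sent (and, on failure, retried) independently over a WAN — can be illustrated with a minimal sketch. This is not the authors' implementation: the chunk size, function names, and the byte-level representation of the model update are all assumptions made for illustration; the actual system hands chunks to RDMA transfers rather than returning them in Python.

```python
# Illustrative sketch (not FedRDMA's actual code): split a serialized
# model update into fixed-size chunks and reassemble it on the receiver.
# CHUNK_SIZE is a hypothetical, tunable value.

CHUNK_SIZE = 4 * 1024 * 1024  # 4 MiB per chunk (assumed)

def chunk_update(update: bytes, chunk_size: int = CHUNK_SIZE):
    """Yield (index, chunk) pairs that together cover the whole update."""
    for offset in range(0, len(update), chunk_size):
        yield offset // chunk_size, update[offset:offset + chunk_size]

def reassemble(chunks):
    """Rebuild the update from (index, chunk) pairs, tolerating reordering."""
    return b"".join(chunk for _, chunk in sorted(chunks))
```

Chunking at this granularity is what lets an RDMA-based transfer recover from a WAN hiccup by resending only the affected chunk instead of the entire multi-gigabyte model update.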
