Momentum-Based Federated Reinforcement Learning with Interaction and Communication Efficiency

24 May 2024
Sheng Yue
Xingyuan Hua
Lili Chen
Ju Ren
Abstract

Federated Reinforcement Learning (FRL) has garnered increasing attention recently. However, due to the intrinsic spatio-temporal non-stationarity of data distributions, current approaches typically suffer from high interaction and communication costs. In this paper, we introduce a new FRL algorithm, named MFPO, that utilizes momentum, importance sampling, and additional server-side adjustment to control the shift of stochastic policy gradients and enhance the efficiency of data utilization. We prove that, with a proper selection of momentum parameters and interaction frequency, MFPO achieves $\tilde{\mathcal{O}}(H N^{-1} \epsilon^{-3/2})$ interaction complexity and $\tilde{\mathcal{O}}(\epsilon^{-1})$ communication complexity (where $N$ is the number of agents): the interaction complexity enjoys a linear speedup in the number of agents, and the communication complexity matches the best achievable by existing first-order FL algorithms. Extensive experiments corroborate the substantial performance gains of MFPO over existing methods on a suite of complex and high-dimensional benchmarks.
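
To make the momentum mechanism described in the abstract concrete, the sketch below combines a STORM-style recursive momentum estimator with an importance weight that corrects the previous gradient for the policy shift between iterates, plus a plain-mean server-side aggregation. This is a minimal illustration under assumed interfaces, not the authors' implementation: the names policy_grad, iw, momentum_update, and server_aggregate are all hypothetical, and the paper's actual server-side adjustment is more involved than a simple average.

import numpy as np

def momentum_update(theta, theta_prev, u_prev, batch, policy_grad, iw, alpha):
    # STORM-style momentum estimator with an off-policy correction:
    #   u_t = alpha * g(theta_t) + (1 - alpha) * (u_{t-1} + g(theta_t) - w * g(theta_{t-1}))
    # where w is an importance-sampling ratio accounting for the policy shift
    # from theta_{t-1} to theta_t. (Illustrative form, not the paper's exact update.)
    g_new = policy_grad(theta, batch)
    g_old = policy_grad(theta_prev, batch)
    w = iw(theta, theta_prev, batch)  # importance weight for the stale gradient
    return alpha * g_new + (1.0 - alpha) * (u_prev + g_new - w * g_old)

def server_aggregate(client_updates):
    # Server-side step: average the N agents' momentum directions.
    # (A plain mean here; the paper applies an additional adjustment.)
    return np.mean(client_updates, axis=0)

# Toy usage with a dummy gradient oracle, purely to show the call pattern:
theta, theta_prev = np.ones(4), np.zeros(4)
u_prev = np.zeros(4)
grad = lambda th, b: 2.0 * (th - b)        # stand-in for a policy-gradient estimator
ratio = lambda th_new, th_old, b: 1.0      # on-policy case => unit importance weight
u = momentum_update(theta, theta_prev, u_prev, np.zeros(4), grad, ratio, alpha=0.1)

The recursive correction term is what lets the estimator reuse stale trajectories: rather than re-collecting on-policy data at every step, the importance weight reweights old gradients, which is how momentum-based methods of this kind reduce interaction cost.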
