arXiv:2409.15723

Federated Large Language Models: Current Progress and Future Directions

24 September 2024
Yuhang Yao
Jianyi Zhang
Junda Wu
Chengkai Huang
Yu Xia
Tong Yu
Ruiyi Zhang
Sungchul Kim
Ryan A. Rossi
Ang Li
Lina Yao
Julian McAuley
Yiran Chen
Carlee Joe-Wong
Abstract

Large language models (LLMs) are rapidly gaining popularity and have been widely adopted in real-world applications. While the quality of training data is essential, privacy concerns arise during data collection. Federated learning (FL) offers a solution by allowing multiple clients to collaboratively train LLMs without sharing their local data. However, FL introduces new challenges, such as model convergence issues caused by heterogeneous data and high communication costs. A comprehensive study is needed to address these challenges and guide future research. This paper surveys federated learning for LLMs (FedLLM), highlighting recent advances and future directions. We focus on two key aspects: fine-tuning and prompt learning in a federated setting, discussing the existing work and the associated research challenges. Finally, we propose potential research directions for federated LLMs, including pre-training and how LLMs can further enhance federated learning.
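To make the federated setup concrete, here is a minimal FedAvg-style sketch of the loop the abstract describes: each client fine-tunes a copy of the model on its private data and shares only the resulting parameters, which the server aggregates by weighted averaging. The helper names (local_finetune, fedavg), the two-parameter toy model, and the client datasets are illustrative assumptions, not from the paper; a real FedLLM system would exchange LLM parameters or adapter weights (e.g., LoRA) rather than a tiny linear model.

```python
# Hypothetical sketch of a FedAvg round for federated fine-tuning.
# Only parameters cross the network; raw client data stays local.

def fedavg(client_weights, client_sizes):
    """Weighted average of client parameter vectors (lists of floats)."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

def local_finetune(global_params, local_data, lr=0.1):
    """Stand-in for a client's local fine-tuning step: one SGD pass over
    the client's private data on a toy least-squares objective."""
    params = list(global_params)  # start from the broadcast global model
    for x, y in local_data:
        err = params[0] * x + params[1] - y
        params[0] -= lr * err * x  # gradient w.r.t. the weight
        params[1] -= lr * err      # gradient w.r.t. the bias
    return params

# Toy simulation: two clients with private, heterogeneous data.
clients = [
    [(1.0, 2.0), (2.0, 4.0)],               # client 0 sees roughly y = 2x
    [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)],   # client 1 sees roughly y = 2x + 1
]
global_params = [0.0, 0.0]
for _ in range(5):  # five communication rounds
    updates = [local_finetune(global_params, data) for data in clients]
    global_params = fedavg(updates, [len(d) for d in clients])
print("aggregated params after 5 rounds:", global_params)
```

Note how the heterogeneity challenge the abstract mentions shows up even in this toy: the two clients' local optima differ, so the averaged model drifts between them, which is exactly the convergence issue FedLLM research must manage at LLM scale.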
