Federated Low-Rank Adaptation for Foundation Models: A Survey

Effectively leveraging private datasets remains a significant challenge in developing foundation models. Federated Learning (FL) has recently emerged as a collaborative framework that enables multiple users to fine-tune these models while mitigating data privacy risks. Meanwhile, Low-Rank Adaptation (LoRA) offers a resource-efficient alternative for fine-tuning foundation models by dramatically reducing the number of trainable parameters. This survey examines how LoRA has been integrated into federated fine-tuning for foundation models, an area we term FedLoRA, by focusing on three key challenges: distributed learning, heterogeneity, and efficiency. We further categorize existing work based on the specific methods used to address each challenge. Finally, we discuss open research questions and highlight promising directions for future investigation, outlining the next steps for advancing FedLoRA.
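For context, the parameter saving behind LoRA, and its federated use, can be sketched in a line; the notation below ($W_0$, $B$, $A$, rank $r$, client index $i$) is ours and reflects the standard formulation, not a definition taken from the survey itself:

$$W = W_0 + \Delta W = W_0 + BA, \qquad B \in \mathbb{R}^{d \times r},\; A \in \mathbb{R}^{r \times k},\; r \ll \min(d, k),$$

so only the $r(d+k)$ entries of $B$ and $A$ are trained instead of the $dk$ entries of a full weight update. In a typical FedLoRA round, each client $i$ fine-tunes and uploads its adapter $(B_i, A_i)$, and the server aggregates them, for example by federated averaging, $\bar{B} = \tfrac{1}{N}\sum_{i=1}^{N} B_i$ and $\bar{A} = \tfrac{1}{N}\sum_{i=1}^{N} A_i$, before broadcasting the merged adapter back to the clients.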
@article{yang2025_2505.13502,
  title   = {Federated Low-Rank Adaptation for Foundation Models: A Survey},
  author  = {Yiyuan Yang and Guodong Long and Qinghua Lu and Liming Zhu and Jing Jiang and Chengqi Zhang},
  journal = {arXiv preprint arXiv:2505.13502},
  year    = {2025}
}