FedBaF: Federated Learning Aggregation Biased by a Foundation Model

24 October 2024
Jong-Ik Park, Srinivasa Pranav, José M. F. Moura, Carlee Joe-Wong
Abstract

Foundation models are now a major focus of leading technology organizations due to their ability to generalize across diverse tasks. Existing approaches for adapting foundation models to new applications often rely on Federated Learning (FL) and disclose the foundation model weights to clients when using them to initialize the global model. While these methods ensure client data privacy, they compromise model and information security. In this paper, we introduce Federated Learning Aggregation Biased by a Foundation Model (FedBaF), a novel method for dynamically integrating pre-trained foundation model weights during the FL aggregation phase. Unlike conventional methods, FedBaF preserves the confidentiality of the foundation model while still leveraging its power to train more accurate models, especially in non-IID and adversarial scenarios. Our comprehensive experiments use Pre-ResNet and foundation models like the Vision Transformer to demonstrate that FedBaF not only matches but often surpasses the test accuracy of traditional weight initialization methods, by up to 11.4% in IID and up to 15.8% in non-IID settings. Additionally, FedBaF applied to a Transformer-based language model significantly reduces perplexity, by up to 39.2%.
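The core mechanism, biasing the server-side aggregation step toward a foundation model that only the server holds, can be illustrated with a short sketch. This is a minimal illustration under stated assumptions: the bias coefficient gamma, its 1/(t+1) decay schedule, and the function name fedbaf_style_aggregate are hypothetical choices for exposition, not the paper's exact algorithm.

import numpy as np

def fedbaf_style_aggregate(client_weights, client_sizes, foundation_weights,
                           round_t, gamma=1.0):
    """Server-side FedAvg-style aggregation biased toward a foundation model.

    The foundation weights never leave the server; clients only ever see
    the aggregated global model. gamma and the 1/(t+1) decay schedule are
    illustrative assumptions, not the paper's exact formulation.
    """
    sizes = np.asarray(client_sizes, dtype=float)
    # Standard FedAvg term: data-size-weighted average of client models.
    fedavg = sum(w * (n / sizes.sum())
                 for w, n in zip(client_weights, sizes))
    # Bias term: pull the aggregate toward the server-held foundation
    # weights, with influence decaying so client data dominates later rounds.
    lam = gamma / (round_t + 1)
    return (fedavg + lam * np.asarray(foundation_weights)) / (1.0 + lam)

# Toy usage: three clients sharing a 4-parameter "model".
clients = [np.random.randn(4) for _ in range(3)]
new_global = fedbaf_style_aggregate(clients, client_sizes=[100, 50, 50],
                                    foundation_weights=np.ones(4), round_t=0)

Because the bias is folded into aggregation rather than into initialization, clients never observe the foundation weights directly, which is the confidentiality property the abstract emphasizes.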

@article{park2025_2410.18352,
  title={FedBaF: Federated Learning Aggregation Biased by a Foundation Model},
  author={Jong-Ik Park and Srinivasa Pranav and José M. F. Moura and Carlee Joe-Wong},
  journal={arXiv preprint arXiv:2410.18352},
  year={2025}
}