Backdoor Threats from Compromised Foundation Models to Federated Learning

arXiv:2311.00144 · 31 October 2023
Xi Li, Songhe Wang, Chen Henry Wu, Hao Zhou, Jiaqi Wang
Abstract

Federated learning (FL) is a novel paradigm in machine learning that addresses critical issues of data privacy and security, yet it suffers from data insufficiency and imbalance. The emergence of foundation models (FMs) offers a promising remedy: FMs can serve as teacher models or as good starting points for FL. However, integrating FMs into FL introduces a new challenge, exposing FL systems to potential threats. This paper investigates the robustness of FL systems that incorporate FMs by assessing their susceptibility to backdoor attacks. In contrast to classic backdoor attacks against FL, the proposed attack (1) does not require the attacker to be fully involved in the FL process; (2) poses a significant risk in practical FL scenarios; (3) evades existing robust FL frameworks and FL backdoor defenses; and (4) underscores the need for research on the robustness of FL systems integrated with FMs. The effectiveness of the proposed attack is demonstrated through extensive experiments with well-known models and benchmark datasets spanning both text and image classification.
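To make the threat model concrete, here is a minimal NumPy sketch, not the paper's implementation: the dataset, trigger pattern, logistic-regression model, and FedAvg setup are all illustrative assumptions. It shows the mechanism the abstract describes: an attacker plants a backdoor in a "foundation" model before FL begins, benign clients then fine-tune that model with standard FedAvg on clean data, and triggered inputs still flip to the attacker's target class at test time.

```python
import numpy as np

rng = np.random.default_rng(0)
D, C = 20, 2                     # feature dimension, number of classes
TRIGGER = np.zeros(D)
TRIGGER[-3:] = 5.0               # hypothetical trigger: a spike in the last 3 features
TARGET = 1                       # attacker-chosen target class

def make_data(n):
    """Toy two-class data: class signal lives in the first D-3 features;
    the trigger dimensions are pure noise for clean samples."""
    y = rng.integers(0, C, n)
    x = rng.normal(0.0, 1.0, (n, D))
    x[:, : D - 3] += np.where(y[:, None] == 1, 0.5, -0.5)
    return x, y

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train(W, x, y, lr=0.1, epochs=20):
    """Multinomial logistic regression by full-batch gradient descent."""
    for _ in range(epochs):
        g = softmax(x @ W)
        g[np.arange(len(y)), y] -= 1.0       # softmax minus one-hot labels
        W = W - lr * (x.T @ g) / len(y)
    return W

# 1. The attacker pre-trains the "foundation model" on poisoned data:
#    20% of samples carry the trigger and are relabeled to TARGET.
x, y = make_data(2000)
poisoned = rng.random(len(y)) < 0.2
x[poisoned] += TRIGGER
y[poisoned] = TARGET
W_fm = train(np.zeros((D, C)), x, y)

# 2. Honest FL: benign clients fine-tune the compromised initialization on
#    clean local data; the server averages their updates (FedAvg).
#    The attacker never participates in these rounds.
W = W_fm.copy()
for _ in range(10):                          # 10 communication rounds
    client_models = []
    for _ in range(5):                       # 5 benign clients per round
        xc, yc = make_data(200)              # clean local data
        client_models.append(train(W.copy(), xc, yc, epochs=5))
    W = np.mean(client_models, axis=0)

# 3. Evaluate the final global model: clean accuracy vs. attack success rate.
xt, yt = make_data(1000)
clean_acc = ((xt @ W).argmax(axis=1) == yt).mean()
asr = (((xt + TRIGGER) @ W).argmax(axis=1) == TARGET).mean()
print(f"clean accuracy = {clean_acc:.2f}, attack success rate = {asr:.2f}")
```

In this toy setup the clean class signal and the trigger occupy disjoint feature dimensions, so benign fine-tuning exerts little gradient pressure to unlearn the trigger weights; this is a deliberately simplified stand-in for the attack surface the abstract describes, where the backdoor enters through the FM rather than through a malicious FL participant.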
