ResearchTrend.AI

arXiv: 2306.04959 (v5, latest)

FedMLSecurity: A Benchmark for Attacks and Defenses in Federated Learning and LLMs

8 June 2023
Shanshan Han
Baturalp Buyukates
Zijian Hu
Han Jin
Weizhao Jin
Lichao Sun
Xiaoyang Sean Wang
Wenxuan Wu
Chulin Xie
Yuhang Yao
Qifan Zhang
Yuhui Zhang
Abstract

This paper introduces FedMLSecurity, a benchmark that simulates adversarial attacks and corresponding defense mechanisms in Federated Learning (FL). As an integral module of FedML, an open-source library that facilitates FL algorithm development and performance comparison, FedMLSecurity enhances FedML's security assessment capabilities. FedMLSecurity comprises two principal components: FedMLAttacker, which simulates attacks injected into FL training, and FedMLDefender, which emulates defensive strategies designed to mitigate their impact. FedMLSecurity is open-sourced and is customizable to a wide range of machine learning models (e.g., Logistic Regression, ResNet, GAN) and federated optimizers (e.g., FedAVG, FedOPT, FedNOVA). Experimental evaluations also demonstrate that FedMLSecurity can be readily applied to Large Language Models (LLMs), further reinforcing its versatility and practical utility across scenarios.
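The attacker/defender pattern the abstract describes can be illustrated with a minimal standalone sketch. Note this is not FedML's actual API: the function names (`byzantine_update`, `median_defense`, etc.) and the toy 3-parameter "model" are illustrative assumptions. It shows the core idea the benchmark simulates: a malicious client poisons its update, plain FedAvg is corrupted by it, and a robust aggregation defense recovers.

```python
# Hypothetical sketch (not FedML's API) of an FL round with one
# Byzantine client and a coordinate-wise median defense.
import random
import statistics

def local_update(weights, noise=0.01):
    """Honest client: returns a slightly perturbed copy of the global model."""
    return [w + random.uniform(-noise, noise) for w in weights]

def byzantine_update(weights, scale=10.0):
    """Malicious client: sends a scaled, sign-flipped update (model poisoning)."""
    return [-scale * w for w in weights]

def fedavg(updates):
    """Plain FedAvg: coordinate-wise mean (vulnerable to outliers)."""
    return [statistics.mean(col) for col in zip(*updates)]

def median_defense(updates):
    """Coordinate-wise median: a simple robust-aggregation defense."""
    return [statistics.median(col) for col in zip(*updates)]

random.seed(0)
global_model = [1.0, -2.0, 0.5]
updates = [local_update(global_model) for _ in range(4)]
updates.append(byzantine_update(global_model))  # one attacker among five clients

poisoned = fedavg(updates)          # pulled far from the honest consensus
defended = median_defense(updates)  # stays close to the honest updates
print("FedAvg (poisoned):", poisoned)
print("Median (defended):", defended)
```

In FedMLSecurity, the analogous roles are played by FedMLAttacker (injecting the poisoned updates during training) and FedMLDefender (applying the mitigation), configured on top of real models and federated optimizers rather than this toy vector.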
