arXiv: 2306.04959 (v5, latest)

FedMLSecurity: A Benchmark for Attacks and Defenses in Federated Learning and LLMs

8 June 2023
Shanshan Han
Baturalp Buyukates
Zijian Hu
Han Jin
Weizhao Jin
Lichao Sun
Xiaoyang Sean Wang
Wenxuan Wu
Chulin Xie
Yuhang Yao
Kai Zhang
Qifan Zhang
Yuhui Zhang
Carlee Joe-Wong
Chaoyang He
Abstract

This paper introduces FedMLSecurity, a benchmark designed to simulate adversarial attacks and the corresponding defense mechanisms in Federated Learning (FL). As an integral module of the open-source FedML library, which facilitates FL algorithm development and performance comparison, FedMLSecurity extends FedML's capabilities to evaluate security issues and potential remedies in FL. FedMLSecurity comprises two major components: FedMLAttacker, which simulates attacks injected during FL training, and FedMLDefender, which simulates defensive mechanisms that mitigate the impact of those attacks. FedMLSecurity is open-sourced and can be customized for a wide range of machine learning models (e.g., Logistic Regression, ResNet, GAN) and federated optimizers (e.g., FedAVG, FedOPT, FedNOVA). FedMLSecurity can also be applied easily to Large Language Models (LLMs), demonstrating its adaptability and applicability in various scenarios.
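To make the attacker/defender split concrete, here is a minimal sketch of the pattern the abstract describes: a component that poisons selected client updates during an FL round, and a component that replaces naive FedAvg aggregation with a Byzantine-robust rule. The class names echo the paper's components, but the code is an illustrative toy (plain Python lists as "models", a sign-flip attack, coordinate-wise median as the defense), not the actual FedML API.

```python
import statistics

def local_update(global_model, data_bias):
    # Honest client: a toy gradient step pulling each weight toward data_bias.
    return [w + 0.1 * (data_bias - w) for w in global_model]

class Attacker:
    """Sign-flip attack: compromised clients send amplified, inverted updates."""
    def __init__(self, malicious_ids, scale=5.0):
        self.malicious_ids = malicious_ids
        self.scale = scale

    def maybe_poison(self, client_id, update, global_model):
        if client_id not in self.malicious_ids:
            return update
        # Push each weight in the opposite direction, amplified by `scale`.
        return [g - self.scale * (u - g) for u, g in zip(update, global_model)]

class Defender:
    """Coordinate-wise median aggregation, a classic Byzantine-robust defense."""
    def aggregate(self, updates):
        return [statistics.median(ws) for ws in zip(*updates)]

def fedavg(updates):
    # Plain FedAvg: per-coordinate mean over all client updates.
    return [sum(ws) / len(ws) for ws in zip(*updates)]

# One simulated round: 5 clients, client 4 is malicious.
attacker = Attacker(malicious_ids={4})
global_model = [0.0, 0.0]
updates = [attacker.maybe_poison(i, local_update(global_model, 1.0), global_model)
           for i in range(5)]
poisoned_avg = fedavg(updates)          # dragged negative by the attacker
defended = Defender().aggregate(updates)  # median ignores the outlier
```

With one sign-flipping client out of five, plain FedAvg is pulled in the wrong direction, while the median aggregate matches the honest clients' step — the kind of attack/defense comparison the benchmark is built to run at scale.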
