Tracing Back the Malicious Clients in Poisoning Attacks to Federated Learning

9 July 2024
Yuqi Jia, Minghong Fang, Hongbin Liu, Jinghuai Zhang, Neil Zhenqiang Gong
AAML

Papers citing "Tracing Back the Malicious Clients in Poisoning Attacks to Federated Learning"

5 / 5 papers shown

UTrace: Poisoning Forensics for Private Collaborative Learning
Evan Rose, Hidde Lycklama, Harsh Chaudhari, Anwar Hithnawi, Alina Oprea
23 Sep 2024

FLCert: Provably Secure Federated Learning against Poisoning Attacks
Xiaoyu Cao, Zaixi Zhang, Jinyuan Jia, Neil Zhenqiang Gong
FedML, OOD
02 Oct 2022

Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks
Shawn Shan, A. Bhagoji, Haitao Zheng, Ben Y. Zhao
AAML
13 Oct 2021

FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping
Xiaoyu Cao, Minghong Fang, Jia Liu, Neil Zhenqiang Gong
FedML
27 Dec 2020

SentiNet: Detecting Localized Universal Attacks Against Deep Learning Systems
Edward Chou, Florian Tramèr, Giancarlo Pellegrino
AAML
02 Dec 2018