FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients (arXiv:2207.09209)

19 July 2022
Zaixi Zhang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong
AAML, FedML

Papers citing "FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients"

28 of 78 citing papers shown
RECESS Vaccine for Federated Learning: Proactive Defense Against Model Poisoning Attacks
Haonan Yan, Wenjing Zhang, Qian Chen, Xiaoguang Li, Wenhai Sun, Hui Li, Xiao-La Lin
AAML · 09 Oct 2023
Resisting Backdoor Attacks in Federated Learning via Bidirectional Elections and Individual Perspective
Zhen Qin, Feiyi Chen, Chen Zhi, Xueqiang Yan, Shuiguang Deng
AAML, FedML · 28 Sep 2023
PA-iMFL: Communication-Efficient Privacy Amplification Method against Data Reconstruction Attack in Improved Multi-Layer Federated Learning
Jianhua Wang, Xiaolin Chang, Jelena Mišić, Vojislav B. Mišić, Zhi Chen, Junchao Fan
25 Sep 2023
Fed-LSAE: Thwarting Poisoning Attacks against Federated Cyber Threat Detection System via Autoencoder-based Latent Space Inspection
Tran Duc Luong, Vuong Minh Tien, N. H. Quyen, Do Thi Thu Hien, Phan The Duy, V. Pham
AAML · 20 Sep 2023
FTA: Stealthy and Adaptive Backdoor Attack with Flexible Triggers on Federated Learning
Yanqi Qiao, Dazhuang Liu, Congwen Chen, Rui Wang, Kaitai Liang
FedML, AAML · 31 Aug 2023
FLShield: A Validation Based Federated Learning Framework to Defend Against Poisoning Attacks
Ehsanul Kabir, Zeyu Song, Md. Rafi Ur Rashid, Shagufta Mehnaz
10 Aug 2023
Backdoor Federated Learning by Poisoning Backdoor-Critical Layers
Haomin Zhuang, Mingxian Yu, Hao Wang, Yang Hua, Jian Li, Xu Yuan
FedML · 08 Aug 2023
G²uardFL: Safeguarding Federated Learning Against Backdoor Attacks through Attributed Client Graph Clustering
Hao Yu, Chuan Ma, Meng Liu, Tianyu Du, Ming Ding, Tao Xiang, Shouling Ji, Xinwang Liu
AAML, FedML · 08 Jun 2023
Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations
T. Krauß, Alexandra Dmitrienko
AAML · 06 Jun 2023
Covert Communication Based on the Poisoning Attack in Federated Learning
Junchuan Liang, Rong Wang
FedML · 02 Jun 2023
Learning Subpocket Prototypes for Generalizable Structure-based Drug Design
Zaixin Zhang, Qi Liu
22 May 2023
FedGT: Identification of Malicious Clients in Federated Learning with Secure Aggregation
M. Xhemrishi, Johan Ostman, Antonia Wachter-Zeh, Alexandre Graell i Amat
FedML · 09 May 2023
Denial-of-Service or Fine-Grained Control: Towards Flexible Model Poisoning Attacks on Federated Learning
Hangtao Zhang, Zeming Yao, L. Zhang, Shengshan Hu, Chao Chen, Alan Liew, Zhetao Li
21 Apr 2023
Protecting Federated Learning from Extreme Model Poisoning Attacks via Multidimensional Time Series Anomaly Detection
Edoardo Gabrielli, Dimitri Belli, Vittorio Miori, Gabriele Tolomei
AAML · 29 Mar 2023
Backdoor Defense via Deconfounded Representation Learning
Zaixin Zhang, Qi Liu, Zhicai Wang, Zepu Lu, Qingyong Hu
AAML · 13 Mar 2023
Backdoor Attacks and Defenses in Federated Learning: Survey, Challenges and Future Research Directions
Thuy-Dung Nguyen, Tuan Nguyen, Phi Le Nguyen, Hieu H. Pham, Khoa D. Doan, Kok-Seng Wong
AAML, FedML · 03 Mar 2023
A Survey of Trustworthy Federated Learning with Perspectives on Security, Robustness, and Privacy
Yifei Zhang, Dun Zeng, Jinglong Luo, Zenglin Xu, Irwin King
FedML · 21 Feb 2023
Poisoning Attacks and Defenses in Federated Learning: A Survey
S. Sagar, Chang-Sun Li, S. W. Loke, Jinho Choi
OOD, FedML · 14 Jan 2023
AFLGuard: Byzantine-robust Asynchronous Federated Learning
Minghong Fang, Jia-Wei Liu, Neil Zhenqiang Gong, Elizabeth S. Bentley
AAML · 13 Dec 2022
Untargeted Attack against Federated Recommendation Systems via Poisonous Item Embeddings and the Defense
Yang Yu, Qi Liu, Likang Wu, Runlong Yu, Sanshi Lei Yu, Zaixin Zhang
FedML · 11 Dec 2022
FedLesScan: Mitigating Stragglers in Serverless Federated Learning
M. Elzohairy, Mohak Chadha, Anshul Jindal, Andreas Grafberger, Jiatao Gu, Michael Gerndt, Osama Abboud
FedML · 10 Nov 2022
FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information
Xiaoyu Cao, Jinyuan Jia, Zaixi Zhang, Neil Zhenqiang Gong
FedML, MU, AAML · 20 Oct 2022
ScionFL: Efficient and Robust Secure Quantized Aggregation
Y. Ben-Itzhak, Helen Möllering, Benny Pinkas, T. Schneider, Ajith Suresh, Oleksandr Tkachenko, S. Vargaftik, Christian Weinert, Hossein Yalame, Avishay Yanai
13 Oct 2022
FLCert: Provably Secure Federated Learning against Poisoning Attacks
Xiaoyu Cao, Zaixi Zhang, Jinyuan Jia, Neil Zhenqiang Gong
FedML, OOD · 02 Oct 2022
Privacy-Preserving Federated Recurrent Neural Networks
Sinem Sav, Abdulrahman Diaa, Apostolos Pyrgelis, Jean-Philippe Bossuat, Jean-Pierre Hubaux
28 Jul 2022
Trusted AI in Multi-agent Systems: An Overview of Privacy and Security for Distributed Learning
Chuan Ma, Jun Li, Kang Wei, Bo Liu, Ming Ding, Long Yuan, Zhu Han, H. Vincent Poor
18 Feb 2022
FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping
Xiaoyu Cao, Minghong Fang, Jia Liu, Neil Zhenqiang Gong
FedML · 27 Dec 2020
Analyzing Federated Learning through an Adversarial Lens
A. Bhagoji, Supriyo Chakraborty, Prateek Mittal, S. Calo
FedML · 29 Nov 2018