FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping

27 December 2020
Xiaoyu Cao
Minghong Fang
Jia Liu
Neil Zhenqiang Gong
    FedML

Papers citing "FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping"

23 / 73 papers shown
FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information
Xiaoyu Cao
Jinyuan Jia
Zaixi Zhang
Neil Zhenqiang Gong
FedML
MU
AAML
20 Oct 2022
Multi-trainer Interactive Reinforcement Learning System
Zhao Guo
Timothy J. Norman
E. Gerding
14 Oct 2022
Cerberus: Exploring Federated Prediction of Security Events
Mohammad Naseri
Yufei Han
Enrico Mariconti
Yun Shen
Gianluca Stringhini
Emiliano De Cristofaro
FedML
07 Sep 2022
Network-Level Adversaries in Federated Learning
Giorgio Severi
Matthew Jagielski
Gokberk Yar
Yuxuan Wang
Alina Oprea
Cristina Nita-Rotaru
FedML
27 Aug 2022
MUDGUARD: Taming Malicious Majorities in Federated Learning using Privacy-Preserving Byzantine-Robust Clustering
Rui Wang
Xingkai Wang
H. Chen
Jérémie Decouchant
S. Picek
Ziqiang Liu
K. Liang
22 Aug 2022
Byzantines can also Learn from History: Fall of Centered Clipping in Federated Learning
Kerem Ozfatura
Emre Ozfatura
Alptekin Kupcu
Deniz Gunduz
AAML
FedML
21 Aug 2022
NET-FLEET: Achieving Linear Convergence Speedup for Fully Decentralized Federated Learning with Heterogeneous Data
Xin Zhang
Minghong Fang
Zhuqing Liu
Haibo Yang
Jia-Wei Liu
Zhengyuan Zhu
FedML
17 Aug 2022
FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients
Zaixi Zhang
Xiaoyu Cao
Jinyuan Jia
Neil Zhenqiang Gong
AAML
FedML
19 Jul 2022
PASS: A Parameter Audit-based Secure and Fair Federated Learning Scheme against Free-Rider Attack
Jianhua Wang
Xiaolin Chang
J. Misic
Vojislav B. Mišić
Yixiang Wang
15 Jul 2022
LIA: Privacy-Preserving Data Quality Evaluation in Federated Learning Using a Lazy Influence Approximation
Ljubomir Rokvic
Panayiotis Danassis
Sai Praneeth Karimireddy
Boi Faltings
TDI
23 May 2022
MPAF: Model Poisoning Attacks to Federated Learning based on Fake Clients
Xiaoyu Cao
Neil Zhenqiang Gong
16 Mar 2022
More is Better (Mostly): On the Backdoor Attacks in Federated Graph Neural Networks
Jing Xu
Rui Wang
Stefanos Koffas
K. Liang
S. Picek
FedML
AAML
07 Feb 2022
Securing Federated Sensitive Topic Classification against Poisoning Attacks
Tianyue Chu
Álvaro García-Recuero
Costas Iordanou
Georgios Smaragdakis
Nikolaos Laoutaris
31 Jan 2022
Robust and Privacy-Preserving Collaborative Learning: A Comprehensive Survey
Shangwei Guo
Xu Zhang
Feiyu Yang
Tianwei Zhang
Yan Gan
Tao Xiang
Yang Liu
FedML
19 Dec 2021
MANDERA: Malicious Node Detection in Federated Learning via Ranking
Wanchuang Zhu
Benjamin Zi Hao Zhao
Simon Luo
Tongliang Liu
Kefeng Deng
AAML
22 Oct 2021
TESSERACT: Gradient Flip Score to Secure Federated Learning Against Model Poisoning Attacks
Atul Sharma
Wei Chen
Joshua C. Zhao
Qiang Qiu
Somali Chaterji
S. Bagchi
FedML
AAML
19 Oct 2021
Federated Learning via Plurality Vote
Kai Yue
Richeng Jin
Chau-Wai Wong
H. Dai
FedML
06 Oct 2021
Data Poisoning Attacks and Defenses to Crowdsourcing Systems
Minghong Fang
Minghao Sun
Qi Li
Neil Zhenqiang Gong
Jinhua Tian
Jia-Wei Liu
18 Feb 2021
FLAME: Taming Backdoors in Federated Learning (Extended Version 1)
T. D. Nguyen
Phillip Rieger
Huili Chen
Hossein Yalame
Helen Mollering
...
Azalia Mirhoseini
S. Zeitouni
F. Koushanfar
A. Sadeghi
T. Schneider
AAML
06 Jan 2021
A Reputation Mechanism Is All You Need: Collaborative Fairness and Adversarial Robustness in Federated Learning
Xinyi Xu
Lingjuan Lyu
FedML
20 Nov 2020
Stochastic-Sign SGD for Federated Learning with Theoretical Guarantees
Richeng Jin
Yufan Huang
Xiaofan He
H. Dai
Tianfu Wu
FedML
25 Feb 2020
Certified Robustness of Community Detection against Adversarial Structural Perturbation via Randomized Smoothing
Jinyuan Jia
Binghui Wang
Xiaoyu Cao
Neil Zhenqiang Gong
AAML
09 Feb 2020
Analyzing Federated Learning through an Adversarial Lens
A. Bhagoji
Supriyo Chakraborty
Prateek Mittal
S. Calo
FedML
29 Nov 2018