The Hidden Vulnerability of Distributed Learning in Byzantium
22 February 2018
El-Mahdi El-Mhamdi, R. Guerraoui, Sébastien Rouault
AAML, FedML

Papers citing "The Hidden Vulnerability of Distributed Learning in Byzantium"

50 / 137 papers shown
Robust Learning Protocol for Federated Tumor Segmentation Challenge
Ambrish Rawat, Giulio Zizzo, S. Kadhe, J. Epperlein, S. Braghin
FedML
16 Dec 2022
Navigation as Attackers Wish? Towards Building Robust Embodied Agents under Federated Learning
Yunchao Zhang, Zonglin Di, KAI-QING Zhou, Cihang Xie, Xin Eric Wang
FedML, AAML
27 Nov 2022
FedCut: A Spectral Analysis Framework for Reliable Detection of Byzantine Colluders
Hanlin Gu, Lixin Fan, Xingxing Tang, Qiang Yang
AAML, FedML
24 Nov 2022
Byzantine Spectral Ranking
Arnhav Datar, A. Rajkumar, Jonathan C. Augustine
15 Nov 2022
Robust Distributed Learning Against Both Distributional Shifts and Byzantine Attacks
Guanqiang Zhou, Ping Xu, Yue Wang, Zhi Tian
OOD, FedML
29 Oct 2022
Secure Distributed Optimization Under Gradient Attacks
Shuhua Yu, S. Kar
28 Oct 2022
FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning
Kaiyuan Zhang, Guanhong Tao, Qiuling Xu, Shuyang Cheng, Shengwei An, ..., Shiwei Feng, Guangyu Shen, Pin-Yu Chen, Shiqing Ma, Xiangyu Zhang
FedML
23 Oct 2022
FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information
Xiaoyu Cao, Jinyuan Jia, Zaixi Zhang, Neil Zhenqiang Gong
FedML, MU, AAML
20 Oct 2022
Emerging Threats in Deep Learning-Based Autonomous Driving: A Comprehensive Survey
Huiyun Cao, Wenlong Zou, Yinkun Wang, Ting Song, Mengjun Liu
AAML
19 Oct 2022
Linear Scalarization for Byzantine-robust learning on non-IID data
Latifa Errami, El Houcine Bergou
AAML
15 Oct 2022
On the Impossible Safety of Large AI Models
El-Mahdi El-Mhamdi, Sadegh Farhadkhani, R. Guerraoui, Nirupam Gupta, L. Hoang, Rafael Pinot, Sébastien Rouault, John Stephan
30 Sep 2022
Unraveling the Connections between Privacy and Certified Robustness in Federated Learning Against Poisoning Attacks
Chulin Xie, Yunhui Long, Pin-Yu Chen, Qinbin Li, Arash Nourian, Sanmi Koyejo, Bo Li
FedML
08 Sep 2022
Network-Level Adversaries in Federated Learning
Giorgio Severi, Matthew Jagielski, Gokberk Yar, Yuxuan Wang, Alina Oprea, Cristina Nita-Rotaru
FedML
27 Aug 2022
A simplified convergence theory for Byzantine resilient stochastic gradient descent
Lindon Roberts, E. Smyth
25 Aug 2022
MUDGUARD: Taming Malicious Majorities in Federated Learning using Privacy-Preserving Byzantine-Robust Clustering
Rui Wang, Xingkai Wang, H. Chen, Jérémie Decouchant, S. Picek, Ziqiang Liu, K. Liang
22 Aug 2022
Byzantines can also Learn from History: Fall of Centered Clipping in Federated Learning
Kerem Ozfatura, Emre Ozfatura, Alptekin Kupcu, Deniz Gunduz
AAML, FedML
21 Aug 2022
FedPerm: Private and Robust Federated Learning by Parameter Permutation
Hamid Mozaffari, Virendra J. Marathe, D. Dice
FedML
16 Aug 2022
Using Anomaly Detection to Detect Poisoning Attacks in Federated Learning Applications
Ali Raza, Shujun Li, K. Tran, L. Koehl, Kim Duc Tran
AAML
18 Jul 2022
Suppressing Poisoning Attacks on Federated Learning for Medical Imaging
Naif Alkhunaizi, Dmitry Kamzolov, Martin Takáč, Karthik Nandakumar
OOD
15 Jul 2022
Enhanced Security and Privacy via Fragmented Federated Learning
N. Jebreel, J. Domingo-Ferrer, Alberto Blanco-Justicia, David Sánchez
FedML
13 Jul 2022
zPROBE: Zero Peek Robustness Checks for Federated Learning
Zahra Ghodsi, Mojan Javaheripi, Nojan Sheybani, Xinqiao Zhang, Ke Huang, F. Koushanfar
FedML
24 Jun 2022
Neurotoxin: Durable Backdoors in Federated Learning
Zhengming Zhang, Ashwinee Panda, Linyue Song, Yaoqing Yang, Michael W. Mahoney, Joseph E. Gonzalez, Kannan Ramchandran, Prateek Mittal
FedML
12 Jun 2022
Byzantine-Resilient Decentralized Stochastic Optimization with Robust Aggregation Rules
Zhaoxian Wu, Tianyi Chen, Qing Ling
09 Jun 2022
VeriFi: Towards Verifiable Federated Unlearning
Xiangshan Gao, Xingjun Ma, Jingyi Wang, Youcheng Sun, Bo Li, S. Ji, Peng Cheng, Jiming Chen
MU
25 May 2022
Byzantine-Robust Federated Learning with Optimal Statistical Rates and Privacy Guarantees
Banghua Zhu, Lun Wang, Qi Pang, Shuai Wang, Jiantao Jiao, D. Song, Michael I. Jordan
FedML
24 May 2022
Robust Quantity-Aware Aggregation for Federated Learning
Jingwei Yi, Fangzhao Wu, Huishuai Zhang, Bin Zhu, Tao Qi, Guangzhong Sun, Xing Xie
FedML
22 May 2022
Federated Multi-Armed Bandits Under Byzantine Attacks
Artun Saday, Ilker Demirel, Yiğit Yıldırım, Cem Tekin
AAML
09 May 2022
Byzantine Fault Tolerance in Distributed Machine Learning: a Survey
Djamila Bouhata, Hamouma Moumen, Moumen Hamouma, Ahcène Bounceur
AI4CE
05 May 2022
Adversarial Analysis of the Differentially-Private Federated Learning in Cyber-Physical Critical Infrastructures
Md Tamjid Hossain, S. Badsha, Hung M. La, Haoting Shen, Shafkat Islam, Ibrahim Khalil, X. Yi
AAML
06 Apr 2022
Semi-Targeted Model Poisoning Attack on Federated Learning via Backward Error Analysis
Yuwei Sun, H. Ochiai, Jun Sakuma
AAML, FedML
22 Mar 2022
MPAF: Model Poisoning Attacks to Federated Learning based on Fake Clients
Xiaoyu Cao, Neil Zhenqiang Gong
16 Mar 2022
Poisoning Attacks and Defenses on Artificial Intelligence: A Survey
M. A. Ramírez, Song-Kyoo Kim, H. A. Hamadi, Ernesto Damiani, Young-Ji Byon, Tae-Yeon Kim, C. Cho, C. Yeun
AAML
21 Feb 2022
Identifying Backdoor Attacks in Federated Learning via Anomaly Detection
Yuxi Mi, Yiheng Sun, Jihong Guan, Shuigeng Zhou
AAML, FedML
09 Feb 2022
Securing Federated Sensitive Topic Classification against Poisoning Attacks
Tianyue Chu, Álvaro García-Recuero, Costas Iordanou, Georgios Smaragdakis, Nikolaos Laoutaris
31 Jan 2022
Survey on Federated Learning Threats: concepts, taxonomy on attacks and defences, experimental study and challenges
Nuria Rodríguez-Barroso, Daniel Jiménez López, M. V. Luzón, Francisco Herrera, Eugenio Martínez-Cámara
FedML
20 Jan 2022
How to Backdoor HyperNetwork in Personalized Federated Learning?
Phung Lai, Nhathai Phan, Issa M. Khalil, Abdallah Khreishah, Xintao Wu
AAML, FedML
18 Jan 2022
LoMar: A Local Defense Against Poisoning Attack on Federated Learning
Xingyu Li, Zhe Qu, Shangqing Zhao, Bo Tang, Zhuo Lu, Yao-Hong Liu
AAML
08 Jan 2022
DeepSight: Mitigating Backdoor Attacks in Federated Learning Through Deep Model Inspection
Phillip Rieger, T. D. Nguyen, Markus Miettinen, A. Sadeghi
FedML, AAML
03 Jan 2022
Challenges and Approaches for Mitigating Byzantine Attacks in Federated Learning
Junyu Shi, Wei Wan, Shengshan Hu, Jianrong Lu, L. Zhang
AAML
29 Dec 2021
Robust and Privacy-Preserving Collaborative Learning: A Comprehensive Survey
Shangwei Guo, Xu Zhang, Feiyu Yang, Tianwei Zhang, Yan Gan, Tao Xiang, Yang Liu
FedML
19 Dec 2021
SparseFed: Mitigating Model Poisoning Attacks in Federated Learning with Sparsification
Ashwinee Panda, Saeed Mahloujifar, A. Bhagoji, Supriyo Chakraborty, Prateek Mittal
FedML, AAML
12 Dec 2021
ARFED: Attack-Resistant Federated averaging based on outlier elimination
Ece Isik Polat, Gorkem Polat, Altan Koçyiğit
AAML, FedML
08 Nov 2021
FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective
Jingwei Sun, Ang Li, Louis DiValentin, Amin Hassanzadeh, Yiran Chen, H. Li
FedML, OOD, AAML
26 Oct 2021
MANDERA: Malicious Node Detection in Federated Learning via Ranking
Wanchuang Zhu, Benjamin Zi Hao Zhao, Simon Luo, Tongliang Liu, Kefeng Deng
AAML
22 Oct 2021
Bristle: Decentralized Federated Learning in Byzantine, Non-i.i.d. Environments
Joost Verbraeken, M. Vos, J. Pouwelse
21 Oct 2021
PipAttack: Poisoning Federated Recommender Systems for Manipulating Item Promotion
Shijie Zhang, Hongzhi Yin, Tong Chen, Zi Huang, Quoc Viet Hung Nguyen, Li-zhen Cui
FedML, AAML
21 Oct 2021
TESSERACT: Gradient Flip Score to Secure Federated Learning Against Model Poisoning Attacks
Atul Sharma, Wei Chen, Joshua C. Zhao, Qiang Qiu, Somali Chaterji, S. Bagchi
FedML, AAML
19 Oct 2021
BEV-SGD: Best Effort Voting SGD for Analog Aggregation Based Federated Learning against Byzantine Attackers
Xin-Yue Fan, Yue Wang, Yan Huo, Zhi Tian
FedML
18 Oct 2021
Combining Differential Privacy and Byzantine Resilience in Distributed SGD
R. Guerraoui, Nirupam Gupta, Rafael Pinot, Sébastien Rouault, John Stephan
FedML
08 Oct 2021
Solon: Communication-efficient Byzantine-resilient Distributed Training via Redundant Gradients
Lingjiao Chen, Leshang Chen, Hongyi Wang, S. Davidson, Yan Sun
FedML
04 Oct 2021