arXiv:1911.11815
Local Model Poisoning Attacks to Byzantine-Robust Federated Learning
Minghong Fang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong · 26 November 2019 · AAML, OOD, FedML
Papers citing "Local Model Poisoning Attacks to Byzantine-Robust Federated Learning" (50 of 184 papers shown)
CrowdGuard: Federated Backdoor Detection in Federated Learning
Phillip Rieger, T. Krauß, Markus Miettinen, Alexandra Dmitrienko, Ahmad-Reza Sadeghi (Technical University Darmstadt) · 14 Oct 2022 · AAML, FedML

Federated Learning based on Defending Against Data Poisoning Attacks in IoT
Jiayin Li, Wenzhong Guo, Xingshuo Han, Jianping Cai, Ximeng Liu · 14 Sep 2022 · AAML

Unraveling the Connections between Privacy and Certified Robustness in Federated Learning Against Poisoning Attacks
Chulin Xie, Yunhui Long, Pin-Yu Chen, Qinbin Li, Arash Nourian, Sanmi Koyejo, Bo Li · 08 Sep 2022 · FedML
Network-Level Adversaries in Federated Learning
Giorgio Severi, Matthew Jagielski, Gokberk Yar, Yuxuan Wang, Alina Oprea, Cristina Nita-Rotaru · 27 Aug 2022 · FedML

MUDGUARD: Taming Malicious Majorities in Federated Learning using Privacy-Preserving Byzantine-Robust Clustering
Rui Wang, Xingkai Wang, H. Chen, Jérémie Decouchant, S. Picek, Zichen Liu, K. Liang · 22 Aug 2022

Byzantines can also Learn from History: Fall of Centered Clipping in Federated Learning
Kerem Ozfatura, Emre Ozfatura, Alptekin Kupcu, Deniz Gunduz · 21 Aug 2022 · AAML, FedML

FedPerm: Private and Robust Federated Learning by Parameter Permutation
Hamid Mozaffari, Virendra J. Marathe, D. Dice · 16 Aug 2022 · FedML
Federated Learning for Medical Applications: A Taxonomy, Current Trends, Challenges, and Future Research Directions
A. Rauniyar, D. Hagos, Debesh Jha, J. E. Haakegaard, Ulas Bagci, D. Rawat, Vladimir Vlassov · 05 Aug 2022 · OOD

FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients
Zaixi Zhang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong · 19 Jul 2022 · AAML, FedML

MUD-PQFed: Towards Malicious User Detection in Privacy-Preserving Quantized Federated Learning
Hua Ma, Qun Li, Yifeng Zheng, Zhi Zhang, Xiaoning Liu, Yan Gao, S. Al-Sarawi, Derek Abbott · 19 Jul 2022 · FedML
Using Anomaly Detection to Detect Poisoning Attacks in Federated Learning Applications
Ali Raza, Shujun Li, K. Tran, L. Koehl, Kim Duc Tran · 18 Jul 2022 · AAML

Suppressing Poisoning Attacks on Federated Learning for Medical Imaging
Naif Alkhunaizi, Dmitry Kamzolov, Martin Takáč, Karthik Nandakumar · 15 Jul 2022 · OOD

Enhanced Security and Privacy via Fragmented Federated Learning
N. Jebreel, J. Domingo-Ferrer, Alberto Blanco-Justicia, David Sánchez · 13 Jul 2022 · FedML

Federated and Transfer Learning: A Survey on Adversaries and Defense Mechanisms
Ehsan Hallaji, R. Razavi-Far, M. Saif · 05 Jul 2022 · AAML, FedML
Backdoor Attack is a Devil in Federated GAN-based Medical Image Synthesis
Ruinan Jin, Xiaoxiao Li · 02 Jul 2022 · AAML, FedML, MedIm

FLVoogd: Robust And Privacy Preserving Federated Learning
Yuhang Tian, Rui Wang, Yan Qiao, E. Panaousis, K. Liang · 24 Jun 2022 · FedML

DECK: Model Hardening for Defending Pervasive Backdoors
Guanhong Tao, Yingqi Liu, Shuyang Cheng, Shengwei An, Zhuo Zhang, Qiuling Xu, Guangyu Shen, Xiangyu Zhang · 18 Jun 2022 · AAML

Edge Security: Challenges and Issues
Xin Jin, Charalampos Katsis, Fan Sang, Jiahao Sun, A. Kundu, Ramana Rao Kompella · 14 Jun 2022
Byzantine-Robust Federated Learning with Optimal Statistical Rates and Privacy Guarantees
Banghua Zhu, Lun Wang, Qi Pang, Shuai Wang, Jiantao Jiao, D. Song, Michael I. Jordan · 24 May 2022 · FedML

Robust Quantity-Aware Aggregation for Federated Learning
Jingwei Yi, Fangzhao Wu, Huishuai Zhang, Bin Zhu, Tao Qi, Guangzhong Sun, Xing Xie · 22 May 2022 · FedML

PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning
Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong · 13 May 2022

Federated Multi-Armed Bandits Under Byzantine Attacks
Artun Saday, Ilker Demirel, Yiğit Yıldırım, Cem Tekin · 09 May 2022 · AAML
Backdoor Attacks in Federated Learning by Rare Embeddings and Gradient Ensembling
Kiyoon Yoo, Nojun Kwak · 29 Apr 2022 · SILM, AAML, FedML

Poisoning Deep Learning Based Recommender Model in Federated Learning Scenarios
Dazhong Rong, Qinming He, Jianhai Chen · 26 Apr 2022 · FedML

Adversarial Analysis of the Differentially-Private Federated Learning in Cyber-Physical Critical Infrastructures
Md Tamjid Hossain, S. Badsha, Hung M. La, Haoting Shen, Shafkat Islam, Ibrahim Khalil, X. Yi · 06 Apr 2022 · AAML

FedRecAttack: Model Poisoning Attack to Federated Recommendation
Dazhong Rong, Shuai Ye, Ruoyan Zhao, Hon Ning Yuen, Jianhai Chen, Qinming He · 01 Apr 2022 · AAML, FedML
MixNN: A design for protecting deep learning models
Chao Liu, Hao Chen, Yusen Wu, Rui Jin · 28 Mar 2022

Latency Optimization for Blockchain-Empowered Federated Learning in Multi-Server Edge Computing
Dinh C. Nguyen, Seyyedali Hosseinalipour, David J. Love, P. Pathirana, Christopher G. Brinton · 18 Mar 2022

MPAF: Model Poisoning Attacks to Federated Learning based on Fake Clients
Xiaoyu Cao, Neil Zhenqiang Gong · 16 Mar 2022

Low-Loss Subspace Compression for Clean Gains against Multi-Agent Backdoor Attacks
Siddhartha Datta, N. Shadbolt · 07 Mar 2022 · AAML

More is Better (Mostly): On the Backdoor Attacks in Federated Graph Neural Networks
Jing Xu, Rui Wang, Stefanos Koffas, K. Liang, S. Picek · 07 Feb 2022 · FedML, AAML
Securing Federated Sensitive Topic Classification against Poisoning Attacks
Tianyue Chu, Álvaro García-Recuero, Costas Iordanou, Georgios Smaragdakis, Nikolaos Laoutaris · 31 Jan 2022

Backdoors Stuck At The Frontdoor: Multi-Agent Backdoor Attacks That Backfire
Siddhartha Datta, N. Shadbolt · 28 Jan 2022 · AAML

FedComm: Federated Learning as a Medium for Covert Communication
Dorjan Hitaj, Giulio Pagnotta, Briland Hitaj, Fernando Perez-Cruz, L. Mancini · 21 Jan 2022 · FedML

Survey on Federated Learning Threats: concepts, taxonomy on attacks and defences, experimental study and challenges
Nuria Rodríguez-Barroso, Daniel Jiménez López, M. V. Luzón, Francisco Herrera, Eugenio Martínez-Cámara · 20 Jan 2022 · FedML

Security for Machine Learning-based Software Systems: a survey of threats, practices and challenges
Huaming Chen, Muhammad Ali Babar · 12 Jan 2022 · AAML
LoMar: A Local Defense Against Poisoning Attack on Federated Learning
Xingyu Li, Zhe Qu, Shangqing Zhao, Bo Tang, Zhuo Lu, Yao-Hong Liu · 08 Jan 2022 · AAML

DeepSight: Mitigating Backdoor Attacks in Federated Learning Through Deep Model Inspection
Phillip Rieger, T. D. Nguyen, Markus Miettinen, A. Sadeghi · 03 Jan 2022 · FedML, AAML

Robust and Privacy-Preserving Collaborative Learning: A Comprehensive Survey
Shangwei Guo, Xu Zhang, Feiyu Yang, Tianwei Zhang, Yan Gan, Tao Xiang, Yang Liu · 19 Dec 2021 · FedML

Analysis and Evaluation of Synchronous and Asynchronous FLchain
F. Wilhelmi, L. Giupponi, Paolo Dini · 15 Dec 2021
SparseFed: Mitigating Model Poisoning Attacks in Federated Learning with Sparsification
Ashwinee Panda, Saeed Mahloujifar, A. Bhagoji, Supriyo Chakraborty, Prateek Mittal · 12 Dec 2021 · FedML, AAML

ARFED: Attack-Resistant Federated averaging based on outlier elimination
Ece Isik Polat, Gorkem Polat, Altan Koçyiğit · 08 Nov 2021 · AAML, FedML

DFL: High-Performance Blockchain-Based Federated Learning
Yongding Tian, Zhuoran Guo, Jiaxuan Zhang, Zaid Al-Ars · 28 Oct 2021 · OOD, FedML

FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective
Jingwei Sun, Ang Li, Louis DiValentin, Amin Hassanzadeh, Yiran Chen, H. Li · 26 Oct 2021 · FedML, OOD, AAML
MANDERA: Malicious Node Detection in Federated Learning via Ranking
Wanchuang Zhu, Benjamin Zi Hao Zhao, Simon Luo, Tongliang Liu, Kefeng Deng · 22 Oct 2021 · AAML

Bristle: Decentralized Federated Learning in Byzantine, Non-i.i.d. Environments
Joost Verbraeken, M. Vos, J. Pouwelse · 21 Oct 2021

PipAttack: Poisoning Federated Recommender Systems for Manipulating Item Promotion
Shijie Zhang, Hongzhi Yin, Tong Chen, Zi Huang, Quoc Viet Hung Nguyen, Li-zhen Cui · 21 Oct 2021 · FedML, AAML

TESSERACT: Gradient Flip Score to Secure Federated Learning Against Model Poisoning Attacks
Atul Sharma, Wei Chen, Joshua C. Zhao, Qiang Qiu, Somali Chaterji, S. Bagchi · 19 Oct 2021 · FedML, AAML
SecFL: Confidential Federated Learning using TEEs
D. Quoc, Christof Fetzer · 03 Oct 2021 · FedML

DeSMP: Differential Privacy-exploited Stealthy Model Poisoning Attacks in Federated Learning
Md Tamjid Hossain, Shafkat Islam, S. Badsha, Haoting Shen · 21 Sep 2021 · AAML