Client-side Gradient Inversion Against Federated Learning from Poisoning
Jiaheng Wei, Yanjun Zhang, Leo Yu Zhang, Chao Chen, Shirui Pan, Kok-Leong Ong, Jinchao Zhang, Yang Xiang
arXiv: 2309.07415 · 14 September 2023 · AAML
Papers citing "Client-side Gradient Inversion Against Federated Learning from Poisoning" (7 / 7 papers shown)
Dealing Doubt: Unveiling Threat Models in Gradient Inversion Attacks under Federated Learning, A Survey and Taxonomy
Yichuan Shi, Olivera Kotevska, Viktor Reshniak, Abhishek Singh, Ramesh Raskar
AAML · 16 May 2024

AGRAMPLIFIER: Defending Federated Learning Against Poisoning Attacks Through Local Update Amplification
Zirui Gong, Liyue Shen, Yanjun Zhang, Leo Yu Zhang, Jingwei Wang, Guangdong Bai, Yong Xiang
AAML · 13 Nov 2023

Fishing for User Data in Large-Batch Federated Learning via Gradient Magnification
Yuxin Wen, Jonas Geiping, Liam H. Fowl, Micah Goldblum, Tom Goldstein
FedML · 01 Feb 2022

Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks
Lukas Struppek, Dominik Hintersdorf, Antonio De Almeida Correia, Antonia Adler, Kristian Kersting
MIACV · 28 Jan 2022

When the Curious Abandon Honesty: Federated Learning Is Not Private
Franziska Boenisch, Adam Dziedzic, R. Schuster, Ali Shahin Shamsabadi, Ilia Shumailov, Nicolas Papernot
FedML, AAML · 06 Dec 2021

FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping
Xiaoyu Cao, Minghong Fang, Jia Liu, Neil Zhenqiang Gong
FedML · 27 Dec 2020

Analyzing Federated Learning through an Adversarial Lens
A. Bhagoji, Supriyo Chakraborty, Prateek Mittal, S. Calo
FedML · 29 Nov 2018