arXiv: 2210.10880
Learning to Invert: Simple Adaptive Attacks for Gradient Inversion in Federated Learning
19 October 2022
Ruihan Wu, Xiangyu Chen, Chuan Guo, Kilian Q. Weinberger
FedML
Papers citing "Learning to Invert: Simple Adaptive Attacks for Gradient Inversion in Federated Learning" (18 papers)
TS-Inverse: A Gradient Inversion Attack Tailored for Federated Time Series Forecasting Models
Caspar Meijer, Jiyue Huang, Shreshtha Sharma, Elena Lazovik, Lydia Y. Chen | AI4TS | 26 Mar 2025

Defending Against Gradient Inversion Attacks for Biomedical Images via Learnable Data Perturbation
Shiyi Jiang, F. Firouzi, Krishnendu Chakrabarty | AAML, MedIm | 19 Mar 2025

From Centralized to Decentralized Federated Learning: Theoretical Insights, Privacy Preservation, and Robustness Challenges
Qiongxiu Li, Wenrui Yu, Yufei Xia, Jun Pang | FedML | 10 Mar 2025

Stealing Training Data from Large Language Models in Decentralized Training through Activation Inversion Attack
Chenxi Dai, Lin Lu, Pan Zhou | 22 Feb 2025

Privacy Attack in Federated Learning is Not Easy: An Experimental Study
Hangyu Zhu, Liyuan Huang, Zhenping Xie | FedML | 28 Sep 2024

The poison of dimensionality
Lê-Nguyên Hoang | 25 Sep 2024

DFDG: Data-Free Dual-Generator Adversarial Distillation for One-Shot Federated Learning
Kangyang Luo, Shuai Wang, Y. Fu, Renrong Shao, Xiang Li, Yunshi Lan, Ming Gao, Jinlong Shu | FedML | 12 Sep 2024

Analyzing Inference Privacy Risks Through Gradients in Machine Learning
Zhuohang Li, Andrew Lowy, Jing Liu, T. Koike-Akino, K. Parsons, Bradley Malin, Ye Wang | FedML | 29 Aug 2024

Understanding Data Reconstruction Leakage in Federated Learning from a Theoretical Perspective
Zifan Wang, Binghui Zhang, Meng Pang, Yuan Hong, Binghui Wang | FedML | 22 Aug 2024

Privacy Preserving Federated Learning in Medical Imaging with Uncertainty Estimation
Nikolas Koutsoubis, Yasin Yilmaz, Ravi P. Ramachandran, M. Schabath, Ghulam Rasool | 18 Jun 2024

Seeing the Forest through the Trees: Data Leakage from Partial Transformer Gradients
Weijun Li, Qiongkai Xu, Mark Dras | PILM | 03 Jun 2024

AI Risk Management Should Incorporate Both Safety and Security
Xiangyu Qi, Yangsibo Huang, Yi Zeng, Edoardo Debenedetti, Jonas Geiping, ..., Chaowei Xiao, Bo-wen Li, Dawn Song, Peter Henderson, Prateek Mittal | AAML | 29 May 2024

Dealing Doubt: Unveiling Threat Models in Gradient Inversion Attacks under Federated Learning, A Survey and Taxonomy
Yichuan Shi, Olivera Kotevska, Viktor Reshniak, Abhishek Singh, Ramesh Raskar | AAML | 16 May 2024

Security and Privacy Challenges of Large Language Models: A Survey
B. Das, M. H. Amini, Yanzhao Wu | PILM, ELM | 30 Jan 2024

GI-PIP: Do We Require Impractical Auxiliary Dataset for Gradient Inversion Attacks?
Yu Sun, Gaojian Xiong, Xianxun Yao, Kailang Ma, Jian Cui | 22 Jan 2024

Hiding in Plain Sight: Disguising Data Stealing Attacks in Federated Learning
Kostadin Garov, Dimitar I. Dimitrov, Nikola Jovanović, Martin Vechev | AAML, FedML | 05 Jun 2023

Decepticons: Corrupted Transformers Breach Privacy in Federated Learning for Language Models
Liam H. Fowl, Jonas Geiping, Steven Reich, Yuxin Wen, Wojtek Czaja, Micah Goldblum, Tom Goldstein | FedML | 29 Jan 2022

When the Curious Abandon Honesty: Federated Learning Is Not Private
Franziska Boenisch, Adam Dziedzic, R. Schuster, Ali Shahin Shamsabadi, Ilia Shumailov, Nicolas Papernot | FedML, AAML | 06 Dec 2021