arXiv:2010.07733

R-GAP: Recursive Gradient Attack on Privacy
Junyi Zhu, Matthew Blaschko. FedML. 15 October 2020.

Papers citing "R-GAP: Recursive Gradient Attack on Privacy" (22 of 22 papers shown)
DAGER: Exact Gradient Inversion for Large Language Models
Ivo Petrov, Dimitar I. Dimitrov, Maximilian Baader, Mark Niklas Muller, Martin Vechev. FedML. 24 May 2024.

On the Efficiency of Privacy Attacks in Federated Learning
Nawrin Tabassum, Ka-Ho Chow, Xuyu Wang, Wenbin Zhang, Yanzhao Wu. FedML. 15 Apr 2024.

Trustworthy Distributed AI Systems: Robustness, Privacy, and Governance
Wenqi Wei, Ling Liu. 02 Feb 2024.

Privacy-preserving quantum federated learning via gradient hiding
Changhao Li, Niraj Kumar, Zhixin Song, Shouvanik Chakrabarti, Marco Pistoia. FedML. 07 Dec 2023.

Privacy Assessment on Reconstructed Images: Are Existing Evaluation Metrics Faithful to Human Perception?
Xiaoxiao Sun, Nidham Gazagnadou, Vivek Sharma, Lingjuan Lyu, Hongdong Li, Liang Zheng. 22 Sep 2023.

A Survey for Federated Learning Evaluations: Goals and Measures
Di Chai, Leye Wang, Liu Yang, Junxue Zhang, Kai Chen, Qian Yang. ELM, FedML. 23 Aug 2023.

Reconstructing Training Data from Model Gradient, Provably
Zihan Wang, Jason D. Lee, Qi Lei. FedML. 07 Dec 2022.

Two Models are Better than One: Federated Learning Is Not Private For Google GBoard Next Word Prediction
Mohamed Suliman, D. Leith. SILM, FedML. 30 Oct 2022.

Analysing Training-Data Leakage from Gradients through Linear Systems and Gradient Matching
Cangxiong Chen, Neill D. F. Campbell. FedML. 20 Oct 2022.

A Survey on Gradient Inversion: Attacks, Defenses and Future Directions
Rui Zhang, Song Guo, Junxiao Wang, Xin Xie, Dacheng Tao. 15 Jun 2022.

Combinatorial optimization for low bit-width neural networks
Hanxu Zhou, Aida Ashrafi, Matthew B. Blaschko. MQ. 04 Jun 2022.

Recovering Private Text in Federated Learning of Language Models
Samyak Gupta, Yangsibo Huang, Zexuan Zhong, Tianyu Gao, Kai Li, Danqi Chen. FedML. 17 May 2022.

Trustworthy Graph Neural Networks: Aspects, Methods and Trends
He Zhang, Bang Wu, Xingliang Yuan, Shirui Pan, Hanghang Tong, Jian Pei. 16 May 2022.

AGIC: Approximate Gradient Inversion Attack on Federated Learning
Jin Xu, Chi Hong, Jiyue Huang, L. Chen, Jérémie Decouchant. AAML, FedML. 28 Apr 2022.

LAMP: Extracting Text from Gradients with Language Model Priors
Mislav Balunović, Dimitar I. Dimitrov, Nikola Jovanović, Martin Vechev. 17 Feb 2022.

Fishing for User Data in Large-Batch Federated Learning via Gradient Magnification
Yuxin Wen, Jonas Geiping, Liam H. Fowl, Micah Goldblum, Tom Goldstein. FedML. 01 Feb 2022.

Gradient Leakage Attack Resilient Deep Learning
Wenqi Wei, Ling Liu. SILM, PILM, AAML. 25 Dec 2021.

Improving Differentially Private SGD via Randomly Sparsified Gradients
Junyi Zhu, Matthew B. Blaschko. 01 Dec 2021.

Evaluating Gradient Inversion Attacks and Defenses in Federated Learning
Yangsibo Huang, Samyak Gupta, Zhao Song, Kai Li, Sanjeev Arora. FedML, AAML, SILM. 30 Nov 2021.

Understanding Training-Data Leakage from Gradients in Neural Networks for Image Classification
Cangxiong Chen, Neill D. F. Campbell. FedML. 19 Nov 2021.

Robbing the Fed: Directly Obtaining Private Data in Federated Learning with Modified Models
Liam H. Fowl, Jonas Geiping, W. Czaja, Micah Goldblum, Tom Goldstein. FedML. 25 Oct 2021.

FedML: A Research Library and Benchmark for Federated Machine Learning
Chaoyang He, Songze Li, Jinhyun So, Xiao Zeng, Mi Zhang, ..., Yang Liu, Ramesh Raskar, Qiang Yang, M. Annavaram, Salman Avestimehr. FedML. 27 Jul 2020.