2006.11601
Rethinking Privacy Preserving Deep Learning: How to Evaluate and Thwart Privacy Attacks
20 June 2020
Lixin Fan, Kam Woh Ng, Ce Ju, Tianyu Zhang, Chang Liu, Chee Seng Chan, Qiang Yang
MIACV
Papers citing
"Rethinking Privacy Preserving Deep Learning: How to Evaluate and Thwart Privacy Attacks"
10 / 10 papers shown

A Survey of What to Share in Federated Learning: Perspectives on Model Utility, Privacy Leakage, and Communication Efficiency
Jiawei Shao, Zijian Li, Wenqiang Sun, Tailin Zhou, Yuchang Sun, Lumin Liu, Zehong Lin, Yuyi Mao, Jun Zhang (FedML)
20 Jul 2023

Theoretically Principled Federated Learning for Balancing Privacy and Utility
Xiaojin Zhang, Wenjie Li, Kai Chen, Shutao Xia, Qian Yang (FedML)
24 May 2023

FedPass: Privacy-Preserving Vertical Federated Deep Learning with Adaptive Obfuscation
Hanlin Gu, Jiahuan Luo, Yan Kang, Lixin Fan, Qiang Yang (FedML)
30 Jan 2023

Reconstructing Training Data from Model Gradient, Provably
Zihan Wang, Jason D. Lee, Qi Lei (FedML)
07 Dec 2022

Analysing Training-Data Leakage from Gradients through Linear Systems and Gradient Matching
Cangxiong Chen, Neill D. F. Campbell (FedML)
20 Oct 2022

A Survey on Gradient Inversion: Attacks, Defenses and Future Directions
Rui Zhang, Song Guo, Junxiao Wang, Xin Xie, Dacheng Tao
15 Jun 2022

Robust and Privacy-Preserving Collaborative Learning: A Comprehensive Survey
Shangwei Guo, Xu Zhang, Feiyu Yang, Tianwei Zhang, Yan Gan, Tao Xiang, Yang Liu (FedML)
19 Dec 2021

Understanding Training-Data Leakage from Gradients in Neural Networks for Image Classification
Cangxiong Chen, Neill D. F. Campbell (FedML)
19 Nov 2021

Robbing the Fed: Directly Obtaining Private Data in Federated Learning with Modified Models
Liam H. Fowl, Jonas Geiping, W. Czaja, Micah Goldblum, Tom Goldstein (FedML)
25 Oct 2021

R-GAP: Recursive Gradient Attack on Privacy
Junyi Zhu, Matthew Blaschko (FedML)
15 Oct 2020