Rethinking Privacy Preserving Deep Learning: How to Evaluate and Thwart Privacy Attacks

20 June 2020
Lixin Fan, Kam Woh Ng, Ce Ju, Tianyu Zhang, Chang Liu, Chee Seng Chan, Qiang Yang
MIACV
arXiv: 2006.11601

Papers citing "Rethinking Privacy Preserving Deep Learning: How to Evaluate and Thwart Privacy Attacks"

10 / 10 papers shown

A Survey of What to Share in Federated Learning: Perspectives on Model Utility, Privacy Leakage, and Communication Efficiency
Jiawei Shao, Zijian Li, Wenqiang Sun, Tailin Zhou, Yuchang Sun, Lumin Liu, Zehong Lin, Yuyi Mao, Jun Zhang
FedML
20 Jul 2023

Theoretically Principled Federated Learning for Balancing Privacy and Utility
Xiaojin Zhang, Wenjie Li, Kai Chen, Shutao Xia, Qiang Yang
FedML
24 May 2023

FedPass: Privacy-Preserving Vertical Federated Deep Learning with Adaptive Obfuscation
Hanlin Gu, Jiahuan Luo, Yan Kang, Lixin Fan, Qiang Yang
FedML
30 Jan 2023

Reconstructing Training Data from Model Gradient, Provably
Zihan Wang, Jason D. Lee, Qi Lei
FedML
07 Dec 2022

Analysing Training-Data Leakage from Gradients through Linear Systems and Gradient Matching
Cangxiong Chen, Neill D. F. Campbell
FedML
20 Oct 2022

A Survey on Gradient Inversion: Attacks, Defenses and Future Directions
Rui Zhang, Song Guo, Junxiao Wang, Xin Xie, Dacheng Tao
15 Jun 2022

Robust and Privacy-Preserving Collaborative Learning: A Comprehensive Survey
Shangwei Guo, Xu Zhang, Feiyu Yang, Tianwei Zhang, Yan Gan, Tao Xiang, Yang Liu
FedML
19 Dec 2021

Understanding Training-Data Leakage from Gradients in Neural Networks for Image Classification
Cangxiong Chen, Neill D. F. Campbell
FedML
19 Nov 2021

Robbing the Fed: Directly Obtaining Private Data in Federated Learning with Modified Models
Liam H. Fowl, Jonas Geiping, W. Czaja, Micah Goldblum, Tom Goldstein
FedML
25 Oct 2021

R-GAP: Recursive Gradient Attack on Privacy
Junyi Zhu, Matthew Blaschko
FedML
15 Oct 2020