Dealing Doubt: Unveiling Threat Models in Gradient Inversion Attacks under Federated Learning, A Survey and Taxonomy

arXiv 2405.10376 · 16 May 2024
Yichuan Shi, Olivera Kotevska, Viktor Reshniak, Abhishek Singh, Ramesh Raskar
AAML

Papers citing "Dealing Doubt: Unveiling Threat Models in Gradient Inversion Attacks under Federated Learning, A Survey and Taxonomy"

18 papers shown

Client-side Gradient Inversion Against Federated Learning from Poisoning (14 Sep 2023)
Jiaheng Wei, Yanjun Zhang, Leo Yu Zhang, Chao Chen, Shirui Pan, Kok-Leong Ong, Jinchao Zhang, Yang Xiang
AAML

An Experimental Study of Byzantine-Robust Aggregation Schemes in Federated Learning (14 Feb 2023)
Shenghui Li, Edith C. H. Ngai, Thiemo Voigt
FedML, AAML

Cocktail Party Attack: Breaking Aggregation-Based Privacy in Federated Learning using Independent Component Analysis (12 Sep 2022)
Sanjay Kariyappa, Chuan Guo, Kiwan Maeng, Wenjie Xiong, G. E. Suh, Moinuddin K. Qureshi, Hsien-Hsin S. Lee
FedML

Recovering Private Text in Federated Learning of Language Models (17 May 2022)
Samyak Gupta, Yangsibo Huang, Zexuan Zhong, Tianyu Gao, Kai Li, Danqi Chen
FedML

Fishing for User Data in Large-Batch Federated Learning via Gradient Magnification (01 Feb 2022)
Yuxin Wen, Jonas Geiping, Liam H. Fowl, Micah Goldblum, Tom Goldstein
FedML

When the Curious Abandon Honesty: Federated Learning Is Not Private (06 Dec 2021)
Franziska Boenisch, Adam Dziedzic, R. Schuster, Ali Shahin Shamsabadi, Ilia Shumailov, Nicolas Papernot
FedML, AAML

Understanding Training-Data Leakage from Gradients in Neural Networks for Image Classification (19 Nov 2021)
Cangxiong Chen, Neill D. F. Campbell
FedML

Gradient Disaggregation: Breaking Privacy in Federated Learning by Reconstructing the User Participant Matrix (10 Jun 2021)
Maximilian Lam, Gu-Yeon Wei, David Brooks, Vijay Janapa Reddi, Michael Mitzenmacher
FedML

R-GAP: Recursive Gradient Attack on Privacy (15 Oct 2020)
Junyi Zhu, Matthew Blaschko
FedML

InstaHide: Instance-hiding Schemes for Private Distributed Learning (06 Oct 2020)
Yangsibo Huang, Zhao Song, Kai Li, Sanjeev Arora
FedML, PICV

Federated Learning with Compression: Unified Analysis and Sharp Guarantees (02 Jul 2020)
Farzin Haddadpour, Mohammad Mahdi Kamani, Aryan Mokhtari, M. Mahdavi
FedML

Inverting Gradients -- How easy is it to break privacy in federated learning? (31 Mar 2020)
Jonas Geiping, Hartmut Bauermeister, Hannah Dröge, Michael Moeller
FedML

A Theory of Usable Information Under Computational Constraints (25 Feb 2020)
Yilun Xu, Shengjia Zhao, Jiaming Song, Russell Stewart, Stefano Ermon

Federated Learning with Differential Privacy: Algorithms and Performance Analysis (01 Nov 2019)
Kang Wei, Jun Li, Ming Ding, Chuan Ma, Heng Yang, Farhad Farokhi, Shi Jin, Tony Q. S. Quek, H. Vincent Poor
FedML

The Unreasonable Effectiveness of Deep Features as a Perceptual Metric (11 Jan 2018)
Richard Zhang, Phillip Isola, Alexei A. Efros, Eli Shechtman, Oliver Wang
EGVM

Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning (24 Feb 2017)
Briland Hitaj, G. Ateniese, Fernando Perez-Cruz
FedML

Communication-Efficient Learning of Deep Networks from Decentralized Data (17 Feb 2016)
H. B. McMahan, Eider Moore, Daniel Ramage, S. Hampson, Blaise Agüera y Arcas
FedML

Federated Optimization: Distributed Optimization Beyond the Datacenter (11 Nov 2015)
Jakub Konečný, H. B. McMahan, Daniel Ramage
FedML