An Accuracy-Lossless Perturbation Method for Defending Privacy Attacks in Federated Learning
arXiv:2002.09843 · 23 February 2020
Xue Yang, Yan Feng, Weijun Fang, Jun Shao, Xiaohu Tang, Shutao Xia, Rongxing Lu
FedML, AAML
Papers citing "An Accuracy-Lossless Perturbation Method for Defending Privacy Attacks in Federated Learning" (7 of 7 papers shown)
Gradients Stand-in for Defending Deep Leakage in Federated Learning
H. Yi, H. Ren, C. Hu, Y. Li, J. Deng, Xin Xie
FedML
11 Oct 2024
A Survey on Vulnerability of Federated Learning: A Learning Algorithm Perspective
Xianghua Xie, Chen Hu, Hanchi Ren, Jingjing Deng
FedML, AAML
27 Nov 2023
A Survey of What to Share in Federated Learning: Perspectives on Model Utility, Privacy Leakage, and Communication Efficiency
Jiawei Shao, Zijian Li, Wenqiang Sun, Tailin Zhou, Yuchang Sun, Lumin Liu, Zehong Lin, Yuyi Mao, Jun Zhang
FedML
20 Jul 2023
Gradient Leakage Defense with Key-Lock Module for Federated Learning
Hanchi Ren, Jingjing Deng, Xianghua Xie, Xiaoke Ma, Jianfeng Ma
FedML
06 May 2023
Vertical Federated Knowledge Transfer via Representation Distillation for Healthcare Collaboration Networks
Chung-ju Huang, Leye Wang, Xiao Han
FedML
11 Feb 2023
Does Federated Learning Really Need Backpropagation?
H. Feng, Tianyu Pang, Chao Du, Wei Chen, Shuicheng Yan, Min-Bin Lin
FedML
28 Jan 2023
Over-the-Air Federated Learning with Privacy Protection via Correlated Additive Perturbations
Jialing Liao, Zheng Chen, Erik G. Larsson
05 Oct 2022