EIFFeL: Ensuring Integrity for Federated Learning

23 December 2021
arXiv: 2112.12727
Authors: A. Chowdhury, Chuan Guo, S. Jha, Laurens van der Maaten
Topics: FedML

Papers citing "EIFFeL: Ensuring Integrity for Federated Learning"

15 papers shown.

Efficient Full-Stack Private Federated Deep Learning with Post-Quantum Security (09 May 2025)
Authors: Yiwei Zhang, R. Behnia, A. Yavuz, Reza Ebrahimi, E. Bertino
Topics: FedML | Citations: 0

SMTFL: Secure Model Training to Untrusted Participants in Federated Learning (24 Feb 2025)
Authors: Zhihui Zhao, Xiaorong Dong, Yimo Ren, Jianhua Wang, Dan Yu, Hongsong Zhu, Yongle Chen
Citations: 0

TAPFed: Threshold Secure Aggregation for Privacy-Preserving Federated Learning (10 Jan 2025)
Authors: Runhua Xu, Bo Li, Chao Li, J. Joshi, Shuai Ma, Jianxin Li
Topics: FedML | Citations: 10

PriRoAgg: Achieving Robust Model Aggregation with Minimum Privacy Leakage for Federated Learning (12 Jul 2024)
Authors: Sizai Hou, Songze Li, Tayyebeh Jahani-Nezhad, Giuseppe Caire
Topics: FedML | Citations: 2

SparseFed: Mitigating Model Poisoning Attacks in Federated Learning with Sparsification (12 Dec 2021)
Authors: Ashwinee Panda, Saeed Mahloujifar, A. Bhagoji, Supriyo Chakraborty, Prateek Mittal
Topics: FedML, AAML | Citations: 85

CRFL: Certifiably Robust Federated Learning against Backdoor Attacks (15 Jun 2021)
Authors: Chulin Xie, Minghao Chen, Pin-Yu Chen, Yue Liu
Topics: FedML | Citations: 166

Byzantine-Resilient Secure Federated Learning (21 Jul 2020)
Authors: Jinhyun So, Başak Güler, A. Avestimehr
Topics: FedML | Citations: 239

Learning to Detect Malicious Clients for Robust Federated Learning (01 Feb 2020)
Authors: Suyi Li, Yong Cheng, Wei Wang, Yang Liu, Tianjian Chen
Topics: AAML, FedML | Citations: 224

The power of synergy in differential privacy: Combining a small curator with local randomizers (18 Dec 2019)
Authors: A. Beimel, Aleksandra Korolova, Kobbi Nissim, Or Sheffet, Uri Stemmer
Citations: 14

Can You Really Backdoor Federated Learning? (18 Nov 2019)
Authors: Ziteng Sun, Peter Kairouz, A. Suresh, H. B. McMahan
Topics: FedML | Citations: 565

Towards Federated Learning at Scale: System Design (04 Feb 2019)
Authors: Keith Bonawitz, Hubert Eichner, W. Grieskamp, Dzmitry Huba, A. Ingerman, ..., H. B. McMahan, Timon Van Overveldt, David Petrou, Daniel Ramage, Jason Roselander
Topics: FedML | Citations: 2,652

Exploiting Unintended Feature Leakage in Collaborative Learning (10 May 2018)
Authors: Luca Melis, Congzheng Song, Emiliano De Cristofaro, Vitaly Shmatikov
Topics: FedML | Citations: 1,461

Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning (15 Dec 2017)
Authors: Xinyun Chen, Chang-rui Liu, Yue Liu, Kimberly Lu, D. Song
Topics: AAML, SILM | Citations: 1,822

Federated Learning: Strategies for Improving Communication Efficiency (18 Oct 2016)
Authors: Jakub Konecný, H. B. McMahan, Felix X. Yu, Peter Richtárik, A. Suresh, Dave Bacon
Topics: FedML | Citations: 4,620

Poisoning Attacks against Support Vector Machines (27 Jun 2012)
Authors: Battista Biggio, B. Nelson, Pavel Laskov
Topics: AAML | Citations: 1,580