FLVoogd: Robust And Privacy Preserving Federated Learning

24 June 2022
Yuhang Tian, Rui Wang, Yan Qiao, E. Panaousis, K. Liang
FedML

Papers citing "FLVoogd: Robust And Privacy Preserving Federated Learning"

11 papers shown
CrypTen: Secure Multi-Party Computation Meets Machine Learning
Brian Knott, Shobha Venkataraman, Awni Y. Hannun, Shubho Sengupta, Mark Ibrahim, Laurens van der Maaten
69 · 354 · 0
02 Sep 2021

Federated Learning Meets Natural Language Processing: A Survey
Ming Liu, Stella Ho, Mengqi Wang, Longxiang Gao, Yuan Jin, Heng Zhang
FedML
39 · 68 · 0
27 Jul 2021

FLAME: Taming Backdoors in Federated Learning (Extended Version 1)
T. D. Nguyen, Phillip Rieger, Huili Chen, Hossein Yalame, Helen Möllering, ..., Azalia Mirhoseini, S. Zeitouni, F. Koushanfar, A. Sadeghi, T. Schneider
AAML
56 · 26 · 0
06 Jan 2021

FedML: A Research Library and Benchmark for Federated Machine Learning
Chaoyang He, Songze Li, Jinhyun So, Xiao Zeng, Mi Zhang, ..., Yang Liu, Ramesh Raskar, Qiang Yang, M. Annavaram, Salman Avestimehr
FedML
221 · 576 · 0
27 Jul 2020

Attack of the Tails: Yes, You Really Can Backdoor Federated Learning
Hongyi Wang, Kartik K. Sreenivasan, Shashank Rajput, Harit Vishwakarma, Saurabh Agarwal, Jy-yong Sohn, Kangwook Lee, Dimitris Papailiopoulos
FedML
68 · 603 · 0
09 Jul 2020

FALCON: Honest-Majority Maliciously Secure Framework for Private Deep Learning
Sameer Wagh, Shruti Tople, Fabrice Benhamouda, E. Kushilevitz, Prateek Mittal, T. Rabin
FedML
52 · 301 · 0
05 Apr 2020

Local Model Poisoning Attacks to Byzantine-Robust Federated Learning
Minghong Fang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong
AAML, OOD, FedML
101 · 1,103 · 0
26 Nov 2019

Subsampled Rényi Differential Privacy and Analytical Moments Accountant
Yu Wang, Borja Balle, S. Kasiviswanathan
71 · 398 · 0
31 Jul 2018

Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates
Dong Yin, Yudong Chen, Kannan Ramchandran, Peter L. Bartlett
OOD, FedML
113 · 1,492 · 0
05 Mar 2018

Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization
Luis Muñoz-González, Battista Biggio, Ambra Demontis, Andrea Paudice, Vasin Wongrassamee, Emil C. Lupu, Fabio Roli
AAML
96 · 630 · 0
29 Aug 2017

BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain
Tianyu Gu, Brendan Dolan-Gavitt, S. Garg
SILM
96 · 1,770 · 0
22 Aug 2017