Mitigating Leakage in Federated Learning with Trusted Hardware

10 November 2020
Javad Ghareh Chamani, D. Papadopoulos
arXiv:2011.04948
Abstract

In federated learning, multiple parties collaborate to train a global model over their respective datasets. Even though cryptographic primitives (e.g., homomorphic encryption) can help achieve data privacy in this setting, some partial information may still be leaked across parties if they are applied non-judiciously. In this work, we study the federated learning framework of SecureBoost [Cheng et al., FL@IJCAI'19] as a specific such example, demonstrate a leakage-abuse attack based on its leakage profile, and experimentally evaluate the effectiveness of our attack. We then propose two secure versions that rely on trusted execution environments. We implement and benchmark our protocols to demonstrate that they are 1.2-5.4X faster in computation and require 5-49X less communication than SecureBoost.
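The leakage the abstract refers to arises in SecureBoost's core round trip: the label-holding (active) party encrypts per-sample gradients under an additively homomorphic scheme, each feature-holding (passive) party returns encrypted per-bucket sums over its private feature bins, and the active party decrypts those sums to score candidate splits. The sketch below illustrates that round trip using the python-paillier (phe) package; the toy gradients, the bucketing, and all variable names are illustrative assumptions, not code from the paper (hessian sums, which SecureBoost aggregates the same way, are omitted for brevity).

# Minimal sketch of SecureBoost-style split finding with additively
# homomorphic (Paillier) encryption. Assumes the python-paillier
# package: pip install phe. All values and names are illustrative.
from phe import paillier

# Active party: holds labels, computes per-sample gradients.
gradients = [0.8, -0.3, 0.5, -0.9, 0.1, 0.4]

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Step 1: active party encrypts the gradients and ships the ciphertexts.
enc_gradients = [public_key.encrypt(g) for g in gradients]

# Step 2: passive party buckets samples by one of its private features
# and homomorphically sums the encrypted gradients within each bucket.
buckets = {0: [0, 3], 1: [1, 4], 2: [2, 5]}  # bucket id -> sample indices
enc_bucket_sums = {
    b: sum((enc_gradients[i] for i in idx), public_key.encrypt(0))
    for b, idx in buckets.items()
}

# Step 3: active party decrypts the per-bucket aggregates to score splits.
bucket_sums = {b: private_key.decrypt(c) for b, c in enc_bucket_sums.items()}
print(bucket_sums)

In broad terms, the decrypted per-bucket aggregates are the leakage profile the paper's attack exploits: beyond the chosen split, the active party learns gradient statistics for every bucket, revealing how the passive party's private feature partitions the samples. The paper's two TEE-based protocols avoid exposing such intermediate aggregates by performing the aggregation inside trusted hardware instead.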
