Towards Quantum-Safe Federated Learning via Homomorphic Encryption: Learning with Gradients

2 February 2024
Guangfeng Yan, Shanxiang Lyu, Hanxu Hou, Zhiyong Zheng, Linqi Song
    FedML
Abstract

This paper introduces a privacy-preserving distributed learning framework based on private-key homomorphic encryption. Thanks to the randomness introduced by gradient quantization, our learning with errors (LWE)-based encryption can eliminate the explicit error terms, avoiding the error expansion that affects conventional LWE-based homomorphic encryption. The proposed system allows a large number of participants to collaboratively train deep neural networks through an honest-but-curious server while ensuring the cryptographic security of the gradients they upload.
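To make the idea concrete, here is a rough sketch of the flow described in the abstract: each client stochastically quantizes its gradient, and the quantization randomness plays the role of the LWE error term, so the server can sum ciphertexts without the error growth of standard LWE. This is an illustrative assumption rather than the paper's actual scheme; all names, helper functions, and parameter values (q, n, scale) are invented for the example.

import numpy as np

# Toy parameters -- assumed for illustration, not taken from the paper.
q = 2**32          # ciphertext modulus
n = 64             # LWE secret dimension
scale = 2**16      # fixed-point quantization scale
rng = np.random.default_rng(0)

# Shared private key held by the clients (private-key setting).
secret = [int(x) for x in rng.integers(0, q, size=n)]

def stochastic_quantize(x: float) -> int:
    """Unbiased stochastic rounding; its randomness stands in for the LWE error."""
    scaled = x * scale
    lo = int(np.floor(scaled))
    return lo + int(rng.random() < (scaled - lo))

def encrypt(m: int):
    """Ciphertext (a, b) with b = <a, s> + m (mod q); no explicit error term added."""
    a = [int(x) for x in rng.integers(0, q, size=n)]
    b = (sum(ai * si for ai, si in zip(a, secret)) + m) % q
    return a, b

def add_ct(c1, c2):
    """Additive homomorphism: the server adds ciphertexts component-wise mod q."""
    a = [(x + y) % q for x, y in zip(c1[0], c2[0])]
    return a, (c1[1] + c2[1]) % q

def decrypt(c) -> int:
    """Recover the signed integer message with the shared secret key."""
    a, b = c
    m = (b - sum(ai * si for ai, si in zip(a, secret))) % q
    return m - q if m > q // 2 else m

# One toy round: three clients each hold a single gradient coordinate.
grads = [0.123, -0.045, 0.310]
cts = [encrypt(stochastic_quantize(g) % q) for g in grads]

# The honest-but-curious server aggregates ciphertexts without seeing any gradient.
agg_ct = cts[0]
for c in cts[1:]:
    agg_ct = add_ct(agg_ct, c)

print("decrypted aggregate:", decrypt(agg_ct) / scale)  # close to 0.388
print("plaintext sum      :", sum(grads))

The decrypted aggregate matches the sum of the quantized gradients, which is all the server ever learns; individual contributions remain hidden behind the LWE-style masking.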

View on arXiv: 2402.01154