ResearchTrend.AI
FedEM: A Privacy-Preserving Framework for Concurrent Utility Preservation in Federated Learning

8 March 2025
Mingcong Xu
Xiaojin Zhang
Wei Chen
Hai Jin
    FedML
Abstract

Federated Learning (FL) enables collaborative training of models across distributed clients without sharing local data, addressing privacy concerns in decentralized systems. However, the gradient-sharing process exposes private data to potential leakage, compromising FL's privacy guarantees in real-world applications. To address this issue, we propose Federated Error Minimization (FedEM), a novel algorithm that incorporates controlled perturbations through adaptive noise injection. This mechanism effectively mitigates gradient leakage attacks while maintaining model performance. Experimental results on benchmark datasets demonstrate that FedEM significantly reduces privacy risks and preserves model accuracy, achieving a robust balance between privacy protection and utility preservation.
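The abstract describes FedEM's core mechanism only at a high level: controlled perturbation of shared gradients via adaptive noise injection. As a rough illustration of that idea (not the paper's actual algorithm, whose details are not given here), a client-side perturbation step might clip each gradient and add noise scaled to the clipping bound before it leaves the client; all names and parameters below are hypothetical:

```python
import numpy as np

def perturb_gradient(grad, clip_norm=1.0, noise_scale=0.1, rng=None):
    """Clip a gradient and inject Gaussian noise before sharing.

    Clipping bounds the gradient's sensitivity; the noise standard
    deviation is tied to the clipping bound, so the perturbation
    adapts to the scale at which gradients are released.
    """
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_scale * clip_norm, size=grad.shape)
    return clipped + noise

# Each client perturbs locally; the server only ever sees noisy gradients.
client_grads = [np.ones(4), 2.0 * np.ones(4)]
shared = [perturb_gradient(g) for g in client_grads]
aggregate = np.mean(shared, axis=0)
```

This is the standard clip-then-noise pattern from differentially private SGD, shown only to make "adaptive noise injection" concrete; FedEM's actual noise schedule and its error-minimization objective are specified in the paper itself.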

@article{xu2025_2503.06021,
  title={FedEM: A Privacy-Preserving Framework for Concurrent Utility Preservation in Federated Learning},
  author={Mingcong Xu and Xiaojin Zhang and Wei Chen and Hai Jin},
  journal={arXiv preprint arXiv:2503.06021},
  year={2025}
}