Smoothed Normalization for Efficient Distributed Private Optimization

20 February 2025
Egor Shulgin
Sarit Khirirat
Peter Richtárik
    FedML
Abstract

Federated learning enables training machine learning models while preserving the privacy of participants. Surprisingly, there is no differentially private (DP) distributed method for smooth, non-convex optimization problems. The reason is that standard privacy techniques require bounding the participants' contributions, usually enforced via clipping of the updates. Existing literature typically either ignores the effect of clipping by assuming the boundedness of gradient norms, or analyzes distributed algorithms with clipping but ignores DP constraints. In this work, we study an alternative approach via smoothed normalization of the updates, motivated by its favorable performance in the single-node setting. By integrating smoothed normalization with an error-feedback mechanism, we design a new distributed algorithm, α-NormEC. We prove that our method achieves a superior convergence rate over prior works. By extending α-NormEC to the DP setting, we obtain the first differentially private distributed optimization algorithm with provable convergence guarantees. Finally, our empirical results from neural network training indicate robust convergence of α-NormEC across different parameter settings.
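
The abstract does not spell out the update rule, but the mechanism it describes (smoothed normalization of worker updates combined with error feedback, plus added noise in the DP variant) can be illustrated with a short sketch. Everything below is an assumption made for exposition: the function names, the use of g / (α + ‖g‖) as the smoothed-normalization operator, the residual bookkeeping, and plain server-side averaging are my reading of the abstract, not the authors' algorithm or reference code.

import numpy as np


def smoothed_normalize(g, alpha):
    # Smoothed normalization (assumed form): scale g by 1 / (alpha + ||g||).
    # The output norm is at most 1, which bounds each participant's
    # contribution without a hard clipping threshold.
    return g / (alpha + np.linalg.norm(g))


def distributed_step(params, worker_grads, errors, alpha, lr, sigma=0.0, rng=None):
    # One illustrative round of smoothed normalization with error feedback.
    # Each worker re-injects its residual from the previous round, sends a
    # bounded-norm message, and stores the new residual; the server averages
    # the messages and takes a gradient-style step. sigma > 0 adds Gaussian
    # noise as a stand-in for a DP mechanism (calibrating the noise to a
    # privacy budget is not modeled here).
    rng = rng or np.random.default_rng(0)
    messages = []
    for i, g in enumerate(worker_grads):
        corrected = g + errors[i]                     # error feedback: re-inject residual
        msg = smoothed_normalize(corrected, alpha)    # bounded-norm message
        errors[i] = corrected - msg                   # store what the compression lost
        if sigma > 0:
            msg = msg + sigma * rng.standard_normal(msg.shape)
        messages.append(msg)
    update = np.mean(messages, axis=0)                # server aggregation
    return params - lr * update, errors

A toy call with three workers, purely to show the interface of this sketch:

params = np.zeros(10)
errors = [np.zeros(10) for _ in range(3)]
grads = [np.random.default_rng(i).standard_normal(10) for i in range(3)]
params, errors = distributed_step(params, grads, errors, alpha=0.1, lr=0.5, sigma=0.01)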

@article{shulgin2025_2502.13482,
  title={Smoothed Normalization for Efficient Distributed Private Optimization},
  author={Egor Shulgin and Sarit Khirirat and Peter Richtárik},
  journal={arXiv preprint arXiv:2502.13482},
  year={2025}
}