
Differential Privacy Analysis of Decentralized Gossip Averaging under Varying Threat Models

26 May 2025
Antti Koskela
Tejas Kulkarni
arXiv: abs | PDF | HTML
Main: 12 pages · 3 figures · Bibliography: 4 pages · Appendix: 8 pages
Abstract

Fully decentralized training of machine learning models offers significant advantages in scalability, robustness, and fault tolerance. However, achieving differential privacy (DP) in such settings is challenging due to the absence of a central aggregator and varying trust assumptions among nodes. In this work, we present a novel privacy analysis of decentralized gossip-based averaging algorithms with additive node-level noise, both with and without secure summation over each node's direct neighbors. Our main contribution is a new analytical framework based on a linear systems formulation that accurately characterizes privacy leakage across these scenarios. This framework significantly improves upon prior analyses, for example, reducing the Rényi DP parameter growth from O(T^2) to O(T), where T is the number of training rounds. We validate our analysis with numerical results demonstrating superior DP bounds compared to existing approaches. We further illustrate our analysis with a logistic regression experiment on MNIST image classification in a fully decentralized setting, demonstrating utility comparable to central aggregation methods.
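To make the setting concrete, here is a minimal sketch of gossip averaging with additive node-level Gaussian noise. It is an illustration only, not the paper's exact algorithm or its privacy accounting: the ring topology, the mixing matrix W, and the noise scale sigma are all assumptions chosen for the example. Each node perturbs its local value once, then synchronous gossip rounds drive all nodes to the average of the noisy values.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 5        # number of nodes, arranged in a ring (assumed topology)
sigma = 0.1  # per-node Gaussian noise scale (assumed, not from the paper)
T = 50       # number of gossip rounds

# Doubly stochastic gossip matrix for a ring:
# each node averages itself with its two neighbors.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 1 / 3
    W[i, (i - 1) % n] = 1 / 3
    W[i, (i + 1) % n] = 1 / 3

x = np.arange(n, dtype=float)                  # local values 0..4
x_noisy = x + sigma * rng.standard_normal(n)   # node-level noise for privacy

z = x_noisy.copy()
for _ in range(T):
    z = W @ z                                  # one synchronous gossip round

# After enough rounds, every node holds (approximately) the noisy average.
print(np.allclose(z, x_noisy.mean(), atol=1e-6))
```

Because W is doubly stochastic, the average is preserved at every round and the iterates contract toward consensus at a rate set by W's second-largest eigenvalue; the paper's linear systems view analyzes how an adversary's observations of such rounds accumulate privacy leakage over T.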

@article{koskela2025_2505.19969,
  title={Differential Privacy Analysis of Decentralized Gossip Averaging under Varying Threat Models},
  author={Antti Koskela and Tejas Kulkarni},
  journal={arXiv preprint arXiv:2505.19969},
  year={2025}
}