Trustworthiness of Stochastic Gradient Descent in Distributed Learning

28 October 2024
Hongyang Li
Caesar Wu
Mohammed Chadli
Said Mammar
Pascal Bouvry
Abstract

Distributed learning (DL) uses multiple nodes to accelerate training, enabling efficient optimization of large-scale models. Stochastic Gradient Descent (SGD), a key optimization algorithm, plays a central role in this process. However, communication bottlenecks often limit scalability and efficiency, leading to the increasing adoption of compressed SGD techniques that alleviate these challenges. Although it addresses communication overhead, compressed SGD introduces trustworthiness concerns, as gradient exchanges among nodes remain vulnerable to attacks such as gradient inversion (GradInv) and membership inference attacks (MIA). The trustworthiness of compressed SGD remains unexplored, leaving important questions about its reliability unanswered. In this paper, we provide a trustworthiness evaluation of compressed versus uncompressed SGD. Specifically, we conduct empirical studies using GradInv attacks, revealing that compressed SGD demonstrates significantly higher resistance to privacy leakage than uncompressed SGD. In addition, our findings suggest that MIA may not be a reliable metric for assessing privacy risks in distributed learning.
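The abstract does not specify which compression scheme is evaluated; as an illustration of the general idea, the sketch below implements top-k gradient sparsification, one common compressed-SGD technique, in which each node transmits only the k largest-magnitude gradient entries per step. The function name and parameters are illustrative, not taken from the paper.

```python
import numpy as np

def topk_compress(grad, k):
    """Keep only the k largest-magnitude entries of a gradient array.

    Top-k sparsification is one common compressed-SGD scheme; this is an
    illustrative sketch, not the specific compressor used in the paper.
    """
    flat = grad.ravel()
    # Indices of the k entries with the largest absolute value.
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[idx] = flat[idx]
    return sparse.reshape(grad.shape)

# Each node would send only the k surviving values (plus their indices),
# reducing per-step communication from O(d) to O(k).
rng = np.random.default_rng(0)
g = rng.normal(size=100)
g_compressed = topk_compress(g, k=10)
print(np.count_nonzero(g_compressed))  # 10
```

Discarding most gradient coordinates is also the intuition behind the paper's finding: a GradInv attacker reconstructing inputs from such sparsified gradients has far less information to work with than with uncompressed exchanges.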

@article{li2025_2410.21491,
  title={Trustworthiness of Stochastic Gradient Descent in Distributed Learning},
  author={Hongyang Li and Caesar Wu and Mohammed Chadli and Said Mammar and Pascal Bouvry},
  journal={arXiv preprint arXiv:2410.21491},
  year={2025}
}