A Mirror Descent-Based Algorithm for Corruption-Tolerant Distributed Gradient Descent

19 July 2024
Shuche Wang
Vincent Y. F. Tan
Communities: FedML, OOD
Abstract

Distributed gradient descent algorithms have come to the fore in modern machine learning, especially for parallelizing computation over large datasets that are spread across several workers. However, scant attention has been paid to analyzing the behavior of distributed gradient descent algorithms in the presence of adversarial corruptions rather than random noise. In this paper, we formulate a novel problem in which adversarial corruptions are present in a distributed learning system. We show how to use ideas from (lazy) mirror descent to design a corruption-tolerant distributed optimization algorithm. An extensive convergence analysis for (strongly) convex loss functions is provided for different choices of the stepsize. We carefully optimize the stepsize schedule to accelerate the convergence of the algorithm while, at the same time, amortizing the effect of the corruptions over time. Experiments based on linear regression, support vector classification, and softmax classification on the MNIST dataset corroborate our theoretical findings.
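The abstract describes a lazy mirror descent loop in which worker gradients, some of them adversarially corrupted, are aggregated at a server, and a decaying stepsize amortizes the corruption over time. The sketch below illustrates that loop under strong simplifying assumptions: a Euclidean mirror map (which makes the mirror step the identity, so the update degenerates to gradient descent), plain averaging at the server, and a single worker whose gradient the adversary perturbs. The function names, the corruption model, and the stepsize constant are all illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def worker_gradients(w, shards, corruption_scale, rng):
    """Each worker computes the gradient of its local least-squares loss.
    An adversary then adds a bounded perturbation to one worker's gradient
    (a hypothetical corruption model, chosen only for illustration)."""
    grads = [X.T @ (X @ w - y) / len(y) for X, y in shards]
    grads[0] = grads[0] + corruption_scale * rng.standard_normal(w.shape)
    return grads

def lazy_mirror_descent(shards, dim, T=500, c=0.5, corruption_scale=1.0, seed=0):
    """Lazy (dual-averaging-style) mirror descent at the server: averaged
    worker gradients are accumulated in a dual variable z, and the iterate
    is recovered through the mirror map. With the Euclidean mirror map
    psi(w) = ||w||^2 / 2 the map is the identity; a non-Euclidean psi
    would change the geometry. The decaying stepsize eta_t = c / sqrt(t)
    spreads the effect of the corruption over time."""
    rng = np.random.default_rng(seed)
    z = np.zeros(dim)
    w = np.zeros(dim)
    for t in range(1, T + 1):
        grads = worker_gradients(w, shards, corruption_scale, rng)
        g = np.mean(grads, axis=0)        # server aggregates by simple averaging
        z -= (c / np.sqrt(t)) * g         # lazy update in the dual space
        w = z.copy()                      # mirror map grad(psi*) = identity here
    return w

# Toy usage: linear regression with data split across 4 workers.
rng = np.random.default_rng(1)
w_true = rng.standard_normal(5)
shards = []
for _ in range(4):
    X = rng.standard_normal((50, 5))
    shards.append((X, X @ w_true))
w_hat = lazy_mirror_descent(shards, dim=5)
print(np.linalg.norm(w_hat - w_true))
```

With one corrupted worker out of four and the decaying stepsize, the recovered iterate lands close to the true parameter; a constant stepsize would instead let the corruption persist in the iterates, which is the trade-off the paper's stepsize schedules are designed to balance.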

View on arXiv: https://arxiv.org/abs/2407.14111
@article{wang2025_2407.14111,
  title={A Mirror Descent-Based Algorithm for Corruption-Tolerant Distributed Gradient Descent},
  author={Shuche Wang and Vincent Y. F. Tan},
  journal={arXiv preprint arXiv:2407.14111},
  year={2025}
}