

arXiv:1903.08752

Byzantine Fault Tolerant Distributed Linear Regression

20 March 2019
Nirupam Gupta
Nitin H. Vaidya
Abstract

This paper considers the problem of Byzantine fault tolerant distributed linear regression. The system comprises a server and n agents, where each agent i holds some data points and responses. Up to f of the n agents are Byzantine faulty, and the identity of the faulty agents is a priori unknown to the server. The data points and responses of honest agents are related linearly through a common parameter, which the server must determine. This seemingly simple problem is challenging to solve due to the Byzantine (or adversarial) nature of faulty agents. We propose a simple norm filtering technique that "robustifies" the original distributed gradient descent algorithm to solve the aforementioned regression problem when f/n is below a specified threshold. The computational complexity of the proposed filtering technique is O(n(d + log n)), and the resultant algorithm is shown to be partially asynchronous. Unlike existing algorithms for Byzantine fault tolerance in distributed statistical learning, the proposed algorithm does not rely on assumptions about the probability distribution of agents' data points. The proposed algorithm also solves a more general distributed multi-agent optimization problem under Byzantine faults.
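The abstract describes a norm filtering step applied at the server before the usual gradient aggregation. As a rough, hypothetical sketch (not the paper's exact algorithm; the function name and the choice of averaging the surviving gradients are assumptions for illustration), the server can compute the Euclidean norm of each agent's gradient in O(nd) time, sort the norms in O(n log n) time, and discard the f gradients with the largest norms — consistent with the O(n(d + log n)) cost stated above:

```python
import numpy as np

def norm_filtered_aggregate(gradients: np.ndarray, f: int) -> np.ndarray:
    """Illustrative norm-filtering sketch (hypothetical helper, not the
    paper's code): drop the f gradients with the largest Euclidean norms,
    then average the remaining n - f gradients.

    gradients: (n, d) array, one row per agent.
    f: upper bound on the number of Byzantine faulty agents.
    """
    n = gradients.shape[0]
    norms = np.linalg.norm(gradients, axis=1)   # O(n d): one norm per agent
    keep = np.argsort(norms)[: n - f]           # O(n log n): keep n - f smallest
    return gradients[keep].mean(axis=0)         # aggregate the survivors
```

For example, with three honest gradients near (1, 1) and one adversarial gradient of very large norm, filtering with f = 1 removes the outlier before aggregation. Averaging here is an assumed aggregation rule for the sketch; the paper's convergence analysis is what justifies the actual update.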
