ResearchTrend.AI

Byzantine-Robust Federated Linear Bandits

3 April 2022
Ali Jadbabaie
Haochuan Li
Jian Qian
Yi Tian
    FedML
Abstract

In this paper, we study a linear bandit optimization problem in a federated setting where a large collection of distributed agents collaboratively learn a common linear bandit model. Standard federated learning algorithms applied to this setting are vulnerable to Byzantine attacks on even a small fraction of agents. We propose a novel algorithm with a robust aggregation oracle that utilizes the geometric median. We prove that our proposed algorithm is robust to Byzantine attacks on fewer than half of agents and achieves a sublinear $\tilde{\mathcal{O}}(T^{3/4})$ regret with $\mathcal{O}(\sqrt{T})$ steps of communication in $T$ steps. Moreover, we make our algorithm differentially private via a tree-based mechanism. Finally, if the level of corruption is known to be small, we show that using the geometric median-of-means oracle for robust aggregation further improves the regret bound.
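The robust aggregation oracle described in the abstract rests on the geometric median, the point minimizing the sum of Euclidean distances to the agents' updates. The sketch below computes it with Weiszfeld's iteratively re-weighted averaging, a standard method for this minimization (the paper itself may use a different solver; function names and tolerances here are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def geometric_median(points, tol=1e-7, max_iter=200):
    """Approximate the geometric median of a set of vectors via
    Weiszfeld's algorithm (iteratively re-weighted mean).

    points: array of shape (n_agents, dim), one update per agent.
    """
    y = points.mean(axis=0)  # start from the coordinate-wise mean
    for _ in range(max_iter):
        d = np.linalg.norm(points - y, axis=1)
        if np.any(d < tol):  # iterate landed on a data point
            return y
        w = 1.0 / d          # weights inversely proportional to distance
        y_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < tol:
            return y_new
        y = y_new
    return y

# Toy illustration of Byzantine robustness: three honest agents report
# updates near (1, 1); one corrupted agent reports a wild outlier.
updates = np.array([[1.0, 1.0], [1.1, 0.9], [0.9, 1.1], [100.0, 100.0]])
agg = geometric_median(updates)       # stays near the honest cluster
naive = updates.mean(axis=0)          # dragged far away by the outlier
```

Because each point's influence on the geometric median is bounded (it enters only through a unit direction vector), a minority of corrupted agents cannot move the aggregate arbitrarily far, unlike the plain mean.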

View on arXiv