Global Convergence of Federated Learning for Mixed Regression

15 June 2022
Lili Su, Jiaming Xu, Pengkun Yang
Abstract

This paper studies the problem of model training under Federated Learning when clients exhibit cluster structure. We contextualize this problem in mixed regression, where each client has limited local data generated from one of $k$ unknown regression models. We design an algorithm that achieves global convergence from any initialization, and works even when local data volume is highly unbalanced -- there could exist clients that contain only $O(1)$ data points. Our algorithm first runs moment descent on a few anchor clients (each with $\tilde{\Omega}(k)$ data points) to obtain coarse model estimates. Then each client alternately estimates its cluster labels and refines the model estimates based on FedAvg or FedProx. A key innovation in our analysis is a uniform estimate on the clustering errors, which we prove by bounding the VC dimension of general polynomial concept classes based on the theory of algebraic geometry.
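The abstract describes a two-stage procedure: a moment-descent warm start on a few anchor clients, followed by rounds in which each client estimates its cluster label and refines the model via FedAvg or FedProx. The sketch below illustrates only the second stage for mixed linear regression under simplifying assumptions; it is not the authors' implementation. The function names (local_label, local_update, federated_mixed_regression), the per-cluster weighted averaging, and the perturbed warm start that stands in for moment descent are all assumptions of this sketch.

```python
# Illustrative sketch (assumptions, not the paper's code): alternating
# cluster-label estimation and per-cluster FedAvg-style averaging for
# mixed linear regression. Each client holds (X_i, y_i) generated by one
# of k unknown linear models; a coarse warm start is assumed to be given,
# standing in for the paper's moment descent on anchor clients.

import numpy as np

def local_label(X, y, thetas):
    """Assign this client to the cluster whose model best fits its local data."""
    losses = [np.mean((y - X @ th) ** 2) for th in thetas]
    return int(np.argmin(losses))

def local_update(X, y, theta, lr=0.1, steps=5):
    """Run a few local gradient steps on the squared loss (FedAvg-style)."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ theta - y) / len(y)
        theta = theta - lr * grad
    return theta

def federated_mixed_regression(clients, thetas_init, rounds=20):
    """Alternate: clients pick cluster labels, then the server averages the
    locally updated models within each cluster, weighted by local data size."""
    thetas = [th.copy() for th in thetas_init]
    k = len(thetas)
    for _ in range(rounds):
        models = [[] for _ in range(k)]   # locally updated models per cluster
        weights = [[] for _ in range(k)]  # local sample counts per cluster
        for X, y in clients:
            c = local_label(X, y, thetas)
            models[c].append(local_update(X, y, thetas[c]))
            weights[c].append(len(y))
        for c in range(k):
            if models[c]:
                w = np.array(weights[c], dtype=float)
                thetas[c] = np.average(np.stack(models[c]), axis=0, weights=w)
    return thetas

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, k = 5, 2
    true_thetas = [rng.normal(size=d) for _ in range(k)]
    clients = []
    for _ in range(100):
        n = rng.integers(1, 20)  # highly unbalanced local data volumes
        X = rng.normal(size=(n, d))
        th = true_thetas[rng.integers(k)]
        clients.append((X, X @ th + 0.1 * rng.normal(size=n)))
    # Perturbed truth as a stand-in for the coarse moment-descent estimates.
    coarse = [th + 0.5 * rng.normal(size=d) for th in true_thetas]
    est = federated_mixed_regression(clients, coarse)
    print([np.linalg.norm(e - t) for e, t in zip(est, true_thetas)])
```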
