Subspace Recovery from Heterogeneous Data with Non-isotropic Noise

24 October 2022
John C. Duchi
Vitaly Feldman
Lunjia Hu
Kunal Talwar
    FedML
Abstract

Recovering linear subspaces from data is a fundamental and important task in statistics and machine learning. Motivated by heterogeneity in Federated Learning settings, we study a basic formulation of this problem: principal component analysis (PCA), with a focus on dealing with irregular noise. Our data come from $n$ users, with user $i$ contributing data samples from a $d$-dimensional distribution with mean $\mu_i$. Our goal is to recover the linear subspace shared by $\mu_1, \ldots, \mu_n$ using the data points from all users, where every data point from user $i$ is formed by adding an independent mean-zero noise vector to $\mu_i$. If we only have one data point from every user, subspace recovery is information-theoretically impossible when the covariance matrices of the noise vectors can be non-spherical, necessitating additional restrictive assumptions in previous work. We avoid these assumptions by leveraging at least two data points from each user, which allows us to design an efficiently-computable estimator under non-spherical and user-dependent noise. We prove an upper bound for the estimation error of our estimator in general scenarios where the number of data points and amount of noise can vary across users, and prove an information-theoretic error lower bound that not only matches the upper bound up to a constant factor, but also holds even for spherical Gaussian noise. This implies that our estimator does not introduce additional estimation error (up to a constant factor) due to irregularity in the noise. We show additional results for a linear regression problem in a similar setup.
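The abstract does not spell out the estimator, but it does identify the key lever: with at least two independent samples per user, the outer product of two distinct samples from user $i$ has expectation $\mu_i \mu_i^\top$, because the independent mean-zero noise cancels in the cross-term regardless of its (possibly non-spherical, user-dependent) covariance. The sketch below illustrates that idea only; the function name `recover_subspace`, the uniform weighting over sample pairs, and the plain eigendecomposition are illustrative assumptions, not the paper's construction, which additionally handles varying sample counts and noise levels per user with matching upper and lower bounds.

```python
import numpy as np

def recover_subspace(user_samples, k):
    """Illustrative sketch: estimate the k-dimensional subspace spanned by the user means.

    user_samples: list of (m_i, d) arrays, one per user, each with m_i >= 2 samples.
    Averages x_a x_b^T over distinct samples a != b from the same user, so the
    independent mean-zero noise cancels in expectation even when its covariance
    is non-spherical and differs across users.
    """
    d = user_samples[0].shape[1]
    A = np.zeros((d, d))
    total_pairs = 0
    for X in user_samples:
        m = X.shape[0]
        s = X.sum(axis=0)
        # Sum over ordered pairs (a, b), a != b, of x_a x_b^T equals
        # (sum_a x_a)(sum_a x_a)^T - sum_a x_a x_a^T.
        A += np.outer(s, s) - X.T @ X
        total_pairs += m * (m - 1)
    A = (A + A.T) / (2 * total_pairs)  # symmetrize and normalize
    eigvals, eigvecs = np.linalg.eigh(A)
    # Return an orthonormal basis of the top-k eigenvectors as a (d, k) matrix.
    return eigvecs[:, np.argsort(eigvals)[::-1][:k]]
```

Equal weighting of all pairs is a simplification chosen here for readability; a weighting that accounts for each user's number of samples and noise level (as the paper's general error bounds suggest) would be the natural refinement.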

View on arXiv