arXiv:2212.09980
Continual Mean Estimation Under User-Level Privacy

20 December 2022
Anand George
Lekshmi Ramesh
A. V. Singh
Himanshu Tyagi
    FedML
Abstract

We consider the problem of continually releasing an estimate of the population mean of a stream of samples that is user-level differentially private (DP). At each time instant, a user contributes a sample, and the users can arrive in arbitrary order. Until now, these requirements of continual release and user-level privacy were considered in isolation. In practice, however, both requirements arise together, since users often contribute data repeatedly and multiple queries are made. We provide an algorithm that outputs a mean estimate at every time instant $t$ such that the overall release is user-level $\varepsilon$-DP and has the following error guarantee: denoting by $M_t$ the maximum number of samples contributed by a user, as long as $\tilde{\Omega}(1/\varepsilon)$ users have $M_t/2$ samples each, the error at time $t$ is $\tilde{O}(1/\sqrt{t} + \sqrt{M_t}/(t\varepsilon))$. This is a universal error guarantee, valid for all arrival patterns of the users. Furthermore, it (almost) matches the existing lower bounds for the single-release setting at all time instants when users have contributed an equal number of samples.
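The paper's continual-release algorithm is not reproduced here, but the standard single-release building block behind user-level DP mean estimation can be sketched: average each user's samples first, so that replacing one user's entire contribution has bounded influence, then add Laplace noise calibrated to that sensitivity. A minimal sketch in Python — the function name, the $[lo, hi]$ bounds, and the use of the basic Laplace mechanism are illustrative assumptions, not the paper's method:

```python
import random

def user_level_dp_mean(user_samples, eps, lo=0.0, hi=1.0):
    """Illustrative single-release user-level eps-DP mean estimate.

    user_samples: dict mapping user id -> list of samples in [lo, hi].
    Each user's samples are averaged first, so replacing one user's
    entire contribution shifts the aggregate by at most (hi - lo) / n.
    Laplace noise with scale sensitivity / eps then yields eps-DP.
    """
    per_user = [sum(s) / len(s) for s in user_samples.values() if s]
    n = len(per_user)
    estimate = sum(per_user) / n
    sensitivity = (hi - lo) / n
    # Sample Laplace(0, b) as a random sign times an Exponential(1/b) draw
    noise = random.choice((-1, 1)) * random.expovariate(eps / sensitivity)
    return estimate + noise
```

As $\varepsilon$ grows, the noise vanishes and the output approaches the mean of per-user averages; the paper's contribution is to extend guarantees of this kind to continual release, valid at every time instant and for arbitrary user arrival patterns.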
