Concurrent Shuffle Differential Privacy Under Continual Observation

29 January 2023
J. Tenenbaum, Haim Kaplan, Yishay Mansour, Uri Stemmer
Abstract

We introduce the concurrent shuffle model of differential privacy. In this model we have multiple concurrent shufflers permuting messages from different, possibly overlapping, batches of users. As in the standard (single) shuffle model, the privacy requirement is that the concatenation of all shuffled messages should be differentially private. We study the private continual summation problem (a.k.a. the counter problem) and show that the concurrent shuffle model allows for significantly improved error compared to a standard (single) shuffle model. Specifically, we give a summation algorithm with error $\tilde{O}(n^{1/(2k+1)})$ with $k$ concurrent shufflers on a sequence of length $n$. Furthermore, we prove that this bound is tight for any $k$, even if the algorithm can choose the sizes of the batches adaptively. For $k = \log n$ shufflers, the resulting error is polylogarithmic, much better than $\tilde{\Theta}(n^{1/3})$, which we show is the smallest possible with a single shuffler. We use our online summation algorithm to get algorithms with improved regret bounds for the contextual linear bandit problem. In particular, we get optimal $\tilde{O}(\sqrt{n})$ regret with $k = \tilde{\Omega}(\log n)$ concurrent shufflers.
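As a rough illustration (not code from the paper), the sketch below simply evaluates the rates quoted in the abstract, dropping the hidden polylogarithmic factors and constants: the single-shuffler rate $n^{1/3}$ versus the rate $n^{1/(2k+1)}$ with $k$ concurrent shufflers, including $k \approx \log n$, where the polynomial part collapses to a constant and the error is dominated by the suppressed polylog factors.

```python
import math

def single_shuffler_error(n: float) -> float:
    # Theta~(n^{1/3}) rate with a single shuffler (polylog factors and constants dropped).
    return n ** (1.0 / 3.0)

def concurrent_shuffler_error(n: float, k: int) -> float:
    # O~(n^{1/(2k+1)}) rate with k concurrent shufflers (polylog factors and constants dropped).
    return n ** (1.0 / (2 * k + 1))

if __name__ == "__main__":
    n = 10**6
    print(f"n = {n}")
    print(f"single shuffler:  n^(1/3)       = {single_shuffler_error(n):.1f}")
    # With k ~ log n, n^(1/(2k+1)) is O(1); the total error is then polylogarithmic.
    for k in (2, 5, int(math.log(n))):
        print(f"k = {k:2d} shufflers: n^(1/(2k+1)) = {concurrent_shuffler_error(n, k):.1f}")
```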

View on arXiv: 2301.12535