arXiv:1702.02970
The Price of Selection in Differential Privacy

9 February 2017
Mitali Bafna
Jonathan R. Ullman
Abstract

In the differentially private top-$k$ selection problem, we are given a dataset $X \in \{\pm 1\}^{n \times d}$, in which each row belongs to an individual and each column corresponds to some binary attribute, and our goal is to find a set of $k \ll d$ columns whose means are approximately as large as possible. Differential privacy requires that our choice of these $k$ columns does not depend too much on any one individual's dataset. This problem can be solved using the well-known exponential mechanism and composition properties of differential privacy. In the high-accuracy regime, where we require the error of the selection procedure to be smaller than the so-called sampling error $\alpha \approx \sqrt{\ln(d)/n}$, this procedure succeeds given a dataset of size $n \gtrsim k \ln(d)$. We prove a matching lower bound, showing that a dataset of size $n \gtrsim k \ln(d)$ is necessary for private top-$k$ selection in this high-accuracy regime. Our lower bound is the first to show that selecting the $k$ largest columns requires more data than simply estimating the value of those $k$ columns, which can be done using a dataset of size just $n \gtrsim k$.
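The upper bound described in the abstract — the exponential mechanism combined with composition — can be sketched in a few lines. The following is an illustrative implementation, not the authors' own code: it selects $k$ columns one at a time ("peeling"), splitting the privacy budget $\varepsilon$ evenly across rounds via basic composition, and at each round samples a remaining column with probability proportional to $\exp(\varepsilon' \cdot \text{mean} / (2\Delta))$, where $\Delta = 2/n$ is the sensitivity of a column mean on $\{\pm 1\}$ data. The function name and parameter choices here are assumptions for illustration.

```python
import numpy as np

def exp_mech_topk(X, k, eps, rng=None):
    """Private top-k column selection via the exponential mechanism
    with peeling (illustrative sketch, not the paper's reference code).

    X   : (n, d) array with entries in {-1, +1}
    k   : number of columns to select
    eps : total privacy budget, split evenly over k rounds
          (basic composition)
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    means = X.mean(axis=0)        # column means = utility scores
    sens = 2.0 / n                # one row changes a mean by at most 2/n
    eps_round = eps / k           # per-round budget under basic composition
    chosen = []
    avail = np.ones(d, dtype=bool)
    for _ in range(k):
        scores = np.where(avail, means, -np.inf)   # exclude chosen columns
        logits = eps_round * scores / (2 * sens)
        logits -= logits[avail].max()              # stabilize the exponentials
        probs = np.where(avail, np.exp(logits), 0.0)
        probs /= probs.sum()
        j = int(rng.choice(d, p=probs))
        chosen.append(j)
        avail[j] = False
    return chosen
```

As the abstract notes, with $n \gtrsim k \ln(d)$ rows this procedure achieves error on the order of the sampling error $\sqrt{\ln(d)/n}$; the paper's contribution is the matching lower bound showing this sample size is also necessary.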
