Typical Stability

12 April 2016
Raef Bassily
Y. Freund
arXiv:1604.03336
Abstract

In this paper, we introduce a notion of algorithmic stability called typical stability. When our goal is to release real-valued queries (statistics) computed over a dataset, this notion does not require the queries to be of bounded sensitivity -- a condition that is generally assumed under differential privacy [DMNS06, Dwork06] when used as a notion of algorithmic stability [DFHPRR15a, DFHPRR15b, BNSSSU16] -- nor does it require the samples in the dataset to be independent -- a condition that is usually assumed when generalization-error guarantees are sought. Instead, typical stability requires the output of the query, when computed on a dataset drawn from the underlying distribution, to be concentrated around its expected value with respect to that distribution. We discuss the implications of typical stability for the generalization error (i.e., the difference between the value of the query computed on the dataset and the expected value of the query with respect to the true data distribution). We show that typical stability can control generalization error in adaptive data analysis even when the samples in the dataset are not necessarily independent and when the queries to be computed are not necessarily of bounded sensitivity, as long as the results of the queries over the dataset (i.e., the computed statistics) follow a distribution with a "light" tail. Examples of such queries include, but are not limited to, subgaussian and subexponential queries. We also discuss the composition guarantees of typical stability and prove composition theorems that characterize the degradation of the parameters of typical stability under k-fold adaptive composition. We also give simple noise-addition algorithms that achieve this notion. These algorithms are similar to their differentially private counterparts; however, the added noise is calibrated differently.
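As an illustration of the last point, the sketch below shows a noise-addition release whose noise scale is driven by an assumed subgaussian tail parameter of the query rather than by its worst-case sensitivity. This is a minimal, hypothetical example, not the authors' algorithm: the function names, the parameter `eta`, and the exact calibration `tail_scale / eta` are assumptions made for illustration only.

```python
import numpy as np

def noisy_release(query, dataset, tail_scale, eta, rng=None):
    """Release a real-valued statistic with Gaussian noise.

    Illustrative sketch only: `tail_scale` is an assumed subgaussian
    parameter of the query's distribution under the data distribution,
    and `eta` is a stability parameter.  Under differential privacy the
    noise would instead be scaled to the query's worst-case sensitivity.
    """
    rng = np.random.default_rng() if rng is None else rng
    true_value = query(dataset)          # real-valued statistic on the data
    noise_std = tail_scale / eta         # hypothetical calibration to the tail bound
    return true_value + rng.normal(0.0, noise_std)

# Example: releasing a sample mean, where concentration of the mean
# (not its sensitivity) determines the noise scale.
data = np.random.default_rng(0).normal(size=1_000)
print(noisy_release(np.mean, data, tail_scale=1.0 / np.sqrt(len(data)), eta=0.1))
```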
