Typicality-Based Stability and Privacy
In this paper, we introduce a new notion of algorithmic stability called typical stability. When our goal is to release real-valued queries (statistics) computed over a dataset, this notion does not require the queries to be of bounded sensitivity -- a condition that is generally assumed under a standard notion of algorithmic stability known as differential privacy [DMNS06, Dwo06]. Instead, typical stability requires the output of the query, when computed on a dataset drawn from the underlying distribution, to be "well-concentrated" around its expected value with respect to that distribution. Typical stability can also be motivated as an alternative definition for database privacy (in that case, we call it typical privacy). Like differential privacy, this notion enjoys several important properties, including robustness to post-processing and adaptive composition. We also discuss the guarantees of typical stability on the generalization error for a broader class of queries than that of bounded-sensitivity queries. This class contains all queries whose output distributions have a "light" tail, e.g., subgaussian and subexponential queries. In particular, we show that if a typically stable interaction with a dataset yields a query from that class, then this query, when evaluated on the same dataset, will have small generalization error with high probability (i.e., it will not overfit to the dataset). We discuss the composition guarantees of typical stability and prove a composition theorem that characterizes the degradation of the parameters of typical stability/privacy under k-fold adaptive composition. We also give simple noise-addition algorithms that achieve this notion. These algorithms are similar to their differentially private counterparts; however, the added noise is calibrated differently.
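To make the final point concrete, the following is a minimal sketch of a noise-addition release in this spirit. It is not the paper's algorithm: the function name, the concentration-width parameter `tau`, and the stability parameter `eta` are hypothetical placeholders, and the Gaussian calibration `sigma = tau / eta` is an illustrative assumption. The only idea it illustrates is the one stated in the abstract: the noise scale is tied to how concentrated the query's output distribution is, rather than to a worst-case sensitivity bound as in the standard Gaussian mechanism of differential privacy.

```python
import numpy as np


def typically_stable_release(query, dataset, tau, eta, rng=None):
    """Release query(dataset) with Gaussian noise.

    Hypothetical sketch: `tau` stands in for the concentration width of
    the query's output distribution (e.g., a subgaussian parameter), and
    `eta` for a typical-stability parameter. Unlike a differentially
    private mechanism, no bounded-sensitivity assumption is used; the
    noise is calibrated to `tau`. The exact calibration in the paper
    differs from this simple `tau / eta` rule.
    """
    rng = np.random.default_rng() if rng is None else rng
    true_value = query(dataset)
    sigma = tau / eta  # noise grows with the tail width, shrinks with eta
    return true_value + rng.normal(0.0, sigma)
```

For a tightly concentrated query (small `tau`), this sketch adds little noise even if the query's worst-case sensitivity over all neighboring datasets is large, which is exactly the regime where bounded-sensitivity calibration would be overly conservative.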