Learning Coverage Functions and Private Release of Marginals

We study the problem of approximating and learning coverage functions. A function $c : 2^{[n]} \to \mathbb{R}^{+}$ is a coverage function if there exists a universe $U$ with a non-negative weight $w(u)$ for each $u \in U$ and subsets $A_1, A_2, \ldots, A_n$ of $U$ such that $c(S) = \sum_{u \in \cup_{i \in S} A_i} w(u)$. Alternatively, coverage functions can be described as non-negative linear combinations of monotone disjunctions. They are a natural subclass of submodular functions and arise in a number of applications.

We give an algorithm that, for any $\gamma, \delta > 0$, given random and uniform examples of an unknown coverage function $c$, finds a function $h$ that approximates $c$ within factor $1+\gamma$ on all but a $\delta$-fraction of the points in time $\mathrm{poly}(n, 1/\gamma, 1/\delta)$. This is the first fully-polynomial algorithm for learning an interesting class of functions in the demanding PMAC model of Balcan and Harvey (2011). Our algorithms are based on several new structural properties of coverage functions. Using the results in (Feldman and Kothari, 2014), we also show that coverage functions are learnable agnostically with excess $\ell_1$-error $\epsilon$ over all product and symmetric distributions in time $n^{O(\log(1/\epsilon))}$. In contrast, we show that, without assumptions on the distribution, learning coverage functions is at least as hard as learning polynomial-size disjoint DNF formulas, a class of functions for which the best known algorithm runs in time $2^{\tilde{O}(n^{1/3})}$ (Klivans and Servedio, 2004).

As an application of our learning results, we give simple differentially-private algorithms for releasing monotone conjunction counting queries with low average error. In particular, for any $k \leq n$, we obtain private release of $k$-way marginals with average error $\bar{\alpha}$ in time $n^{O(\log(1/\bar{\alpha}))}$.
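To make the definition concrete, here is a minimal sketch (not from the paper) of evaluating a coverage function $c(S) = \sum_{u \in \cup_{i \in S} A_i} w(u)$ on a small hypothetical universe with illustrative weights and sets:

```python
def coverage(S, sets, weights):
    """Weighted coverage of the sets indexed by S:
    sum of w(u) over every u in the union of A_i for i in S."""
    covered = set().union(*(sets[i] for i in S)) if S else set()
    return sum(weights[u] for u in covered)

# Hypothetical example: universe U = {0, 1, 2, 3} with unit weights
# and three subsets A_1, A_2, A_3 (indices and contents chosen for
# illustration only).
weights = {0: 1.0, 1: 1.0, 2: 1.0, 3: 1.0}
sets = {1: {0, 1}, 2: {1, 2}, 3: {3}}

print(coverage({1, 2}, sets, weights))  # union {0, 1, 2} -> 3.0
```

Note that the overlap element $1 \in A_1 \cap A_2$ is counted only once, which is exactly what makes the function submodular rather than additive.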