
The Sample Complexity of Smooth Boosting and the Tightness of the Hardcore Theorem

Abstract

Smooth boosters generate distributions that do not place too much weight on any given example. Originally introduced for their noise-tolerant properties, such boosters have also found applications in differential privacy, reproducibility, and quantum learning theory. We study and settle the sample complexity of smooth boosting: we exhibit a class that can be weakly learned to $\gamma$-advantage over smooth distributions with $m$ samples, for which strong learning over the uniform distribution requires $\tilde{\Omega}(1/\gamma^2)\cdot m$ samples. This matches the overhead of existing smooth boosters and provides the first separation from the setting of distribution-independent boosting, for which the corresponding overhead is $O(1/\gamma)$. Our work also sheds new light on Impagliazzo's hardcore theorem from complexity theory, all known proofs of which can be cast in the framework of smooth boosting. For a function $f$ that is mildly hard against size-$s$ circuits, the hardcore theorem provides a set of inputs on which $f$ is extremely hard against size-$s'$ circuits. A downside of this important result is the loss in circuit size, i.e. that $s' \ll s$. Answering a question of Trevisan, we show that this size loss is necessary and, in fact, that the parameters achieved by known proofs are the best possible.
