Generalization Bounds for Uniformly Stable Algorithms

Uniform stability of a learning algorithm is a classical notion of algorithmic stability introduced to derive high-probability bounds on the generalization error (Bousquet and Elisseeff, 2002). Specifically, for a loss function with range bounded in $[0,1]$, the generalization error of a $\gamma$-uniformly stable learning algorithm on $n$ samples is known to be within $O((\gamma + 1/n)\sqrt{n \log(1/\delta)})$ of the empirical error with probability at least $1-\delta$. Unfortunately, this bound does not lead to meaningful generalization bounds in many common settings where $\gamma \geq 1/\sqrt{n}$. At the same time, the bound is known to be tight only when $\gamma = O(1/n)$. We substantially improve generalization bounds for uniformly stable algorithms without making any additional assumptions. First, we show that the bound in this setting is $O(\sqrt{(\gamma + 1/n)\log(1/\delta)})$ with probability at least $1-\delta$. In addition, we prove a tight bound of $O(\gamma^2 + 1/n)$ on the second moment of the estimation error. The best previous bound on the second moment is $O(\gamma + 1/n)$. Our proofs are based on new analysis techniques, and our results imply substantially stronger generalization guarantees for several well-studied algorithms.
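
As an illustrative sketch of the improvement, consider the boundary regime $\gamma = 1/\sqrt{n}$, the threshold beyond which the prior bound stops being meaningful (losses in $[0,1]$, fixed $\delta$):

```latex
% Worked comparison at \gamma = 1/\sqrt{n}, losses in [0,1], fixed \delta.
% Prior bound (Bousquet and Elisseeff, 2002): does not vanish as n grows.
\[ \Big(\gamma + \tfrac{1}{n}\Big)\sqrt{n \log(1/\delta)}
   \;=\; \Theta\big(\sqrt{\log(1/\delta)}\big) \]
% Bound from this paper: still vanishes, at rate n^{-1/4}.
\[ \sqrt{\Big(\gamma + \tfrac{1}{n}\Big)\log(1/\delta)}
   \;=\; \Theta\big(n^{-1/4}\sqrt{\log(1/\delta)}\big) \]
```

In this regime the prior bound stays bounded away from zero, while the new bound still vanishes as $n \to \infty$.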