Revisiting Agnostic PAC Learning

PAC learning, dating back to Valiant'84 and Vapnik and Chervonenkis'64,'74, is a classic model for studying supervised learning. In the agnostic setting, we have access to a hypothesis set $\mathcal{H}$ and a training set of labeled samples $(x_1, y_1), \dots, (x_n, y_n)$ drawn i.i.d. from an unknown distribution $\mathcal{D}$. The goal is to produce a classifier that is competitive with the hypothesis $h^\star \in \mathcal{H}$ having the least probability of mispredicting the label $y$ of a new sample $(x, y) \sim \mathcal{D}$. Empirical Risk Minimization (ERM) is a natural learning algorithm, where one simply outputs the hypothesis from $\mathcal{H}$ making the fewest mistakes on the training data. This simple algorithm is known to have an optimal error in terms of the VC-dimension $d$ of $\mathcal{H}$ and the number of samples $n$. In this work, we revisit agnostic PAC learning and first show that ERM is in fact sub-optimal if we treat the performance of the best hypothesis, denoted $\tau := \Pr_{(x,y) \sim \mathcal{D}}[h^\star(x) \neq y]$, as a parameter. Concretely, we show that ERM, and any other proper learning algorithm, is sub-optimal by a $\sqrt{\ln(1/\tau)}$ factor. We then complement this lower bound with the first learning algorithm achieving an optimal error for nearly the full range of $\tau$. Our algorithm introduces several new ideas that we hope may find further applications in learning theory.
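
For intuition only, here is a minimal sketch of ERM over a finite hypothesis class; the function name `erm`, the 1-D threshold class, and the noise level are illustrative assumptions and not taken from the paper.

```python
import numpy as np

def erm(hypotheses, X, y):
    """Empirical Risk Minimization: return the hypothesis making the
    fewest mistakes on the training set (X, y)."""
    def empirical_error(h):
        return np.mean([h(x) != label for x, label in zip(X, y)])
    return min(hypotheses, key=empirical_error)

# Illustrative hypothesis class: 1-D threshold classifiers (VC-dimension 1).
thresholds = np.linspace(0.0, 1.0, 101)
hypotheses = [lambda x, t=t: 1 if x >= t else -1 for t in thresholds]

# Noisy training data: the best threshold is 0.5, but each label is flipped
# with probability 0.1, so the best achievable error tau is roughly 0.1.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=500)
y = np.where(X >= 0.5, 1, -1)
flip = rng.random(500) < 0.1
y[flip] *= -1

h_hat = erm(hypotheses, X, y)
print("empirical error of ERM output:",
      np.mean([h_hat(x) != label for x, label in zip(X, y)]))
```

The paper's point is that, when the best-in-class error $\tau$ is treated as a parameter, this simple rule (and any proper learner) can be improved upon by an improper learning algorithm.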