
Benign Overfitting in Classification: Provably Counter Label Noise with Larger Models

International Conference on Learning Representations (ICLR), 2022
Main: 8 pages · Appendix: 11 pages · Bibliography: 6 pages · 10 figures · 1 table
Abstract

Studies of benign overfitting provide insight into the success of overparameterized deep learning models. In this work, we examine whether overfitting is truly benign in real-world classification tasks. We start with the observation that a ResNet model overfits benignly on CIFAR-10 but not benignly on ImageNet. To understand why benign overfitting fails in the ImageNet experiment, we theoretically analyze benign overfitting under a more restrictive setup in which the number of parameters is not significantly larger than the number of data points. Under this mild overparameterization setup, our analysis identifies a phase change: unlike in previous heavy overparameterization settings, benign overfitting can now fail in the presence of label noise. Our analysis explains our empirical observations and is validated by a set of control experiments with ResNets. Our work highlights the importance of understanding implicit bias in underfitting regimes as a future direction.
