Double-descent curves in neural networks describe the phenomenon that the generalisation error first decreases as the number of parameters increases, then grows after an optimal number of parameters (smaller than the number of data points) is reached, and then decreases again in the overparameterized regime. Here we use a neural network Gaussian process (NNGP), which maps exactly to a fully connected network (FCN) in the infinite-width limit, combined with techniques from random matrix theory, to calculate this generalisation behaviour. An advantage of the NNGP approach is that the analytical calculations are easier to interpret. We argue that the decrease of the generalisation error in the overparameterized regime, and its convergence to a finite theoretical value, is explained by the convergence of neural networks to their limiting Gaussian processes. Our analysis thus provides a mathematical explanation for a surprising phenomenon that could not be explained by conventional statistical learning theory. However, understanding why these finite theoretical values yield state-of-the-art generalisation performance in many applications remains an open question, for which we only provide new leads in this paper.
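As a rough illustration of the kind of infinite-width predictor discussed above, the sketch below implements standard NNGP regression for a ReLU fully connected network using the well-known arc-cosine kernel recursion (Cho & Saul, 2009) and the usual Gaussian process posterior mean. This is not the paper's calculation; the hyperparameters (sigma_w2, sigma_b2, noise), the network depth, and the toy data are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the authors' code): NNGP regression for a
# ReLU FCN in the infinite-width limit, via the arc-cosine kernel recursion.
import numpy as np

def nngp_kernel(X1, X2, depth=1, sigma_w2=1.0, sigma_b2=0.1):
    """NNGP kernel of a depth-`depth` ReLU FCN in the infinite-width limit."""
    d = X1.shape[1]
    # Input-layer covariance between the two sets of points.
    K12 = sigma_b2 + sigma_w2 * X1 @ X2.T / d
    K11 = sigma_b2 + sigma_w2 * np.sum(X1**2, axis=1) / d  # diagonal terms for X1
    K22 = sigma_b2 + sigma_w2 * np.sum(X2**2, axis=1) / d  # diagonal terms for X2
    for _ in range(depth):
        norm = np.sqrt(np.outer(K11, K22))
        theta = np.arccos(np.clip(K12 / norm, -1.0, 1.0))
        # ReLU (arc-cosine) kernel: E[relu(z1) relu(z2)] for correlated Gaussians.
        K12 = sigma_b2 + sigma_w2 / (2 * np.pi) * norm * (
            np.sin(theta) + (np.pi - theta) * np.cos(theta))
        # Diagonal update uses E[relu(z)^2] = K/2 for zero-mean Gaussian z.
        K11 = sigma_b2 + sigma_w2 / 2 * K11
        K22 = sigma_b2 + sigma_w2 / 2 * K22
    return K12, K11, K22

def nngp_predict(X_train, y_train, X_test, noise=1e-3, **kw):
    """Gaussian process posterior mean on X_test under the NNGP prior."""
    K_tt, _, _ = nngp_kernel(X_train, X_train, **kw)
    K_st, _, _ = nngp_kernel(X_test, X_train, **kw)
    alpha = np.linalg.solve(K_tt + noise * np.eye(len(X_train)), y_train)
    return K_st @ alpha

# Toy usage: mean-squared generalisation error of the infinite-width predictor
# on a synthetic linear teacher (illustrative data only).
rng = np.random.default_rng(0)
X_train, X_test = rng.normal(size=(50, 10)), rng.normal(size=(500, 10))
w_star = rng.normal(size=10)
y_train, y_test = X_train @ w_star, X_test @ w_star
print("test MSE:", np.mean((nngp_predict(X_train, y_train, X_test) - y_test) ** 2))
```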