
Double-descent curves in neural networks: a new perspective using Gaussian processes

14 February 2021
Ouns El Harzli
Bernardo Cuenca Grau
Guillermo Valle Pérez
arXiv: 2102.07238
Abstract

Double-descent curves in neural networks describe the phenomenon in which the generalisation error first decreases as the number of parameters grows, rises after an optimal parameter count that is smaller than the number of data points, and then decreases again in the overparameterised regime. Here we use a neural network Gaussian process (NNGP), which maps exactly to a fully connected network (FCN) in the infinite-width limit, combined with techniques from random matrix theory, to calculate this generalisation behaviour. An advantage of our NNGP approach is that the analytical calculations are easier to interpret. We argue that the decrease of the generalisation error in the overparameterised regime, together with the fact that it converges to a finite theoretical value, is explained by the convergence of neural networks to their limiting Gaussian processes. Our analysis thus provides a mathematical explanation for a surprising phenomenon that could not be explained by conventional statistical learning theory. However, understanding why these finite theoretical values yield state-of-the-art generalisation performance in many applications remains an open question, for which we provide only new leads in this paper.
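
The double-descent shape described in the abstract is straightforward to reproduce numerically. Below is a minimal, illustrative sketch (not the authors' code and not their NNGP calculation): it fits a random-feature model in which only the output layer is trained by minimum-norm least squares, and prints the test error as the hidden-layer width grows past the number of training points. All names, hyperparameters, and the synthetic data are assumptions chosen for illustration; only NumPy is required.

# Minimal sketch of double descent with random ReLU features (assumed setup,
# not the paper's experiments): train only the output weights by
# minimum-norm least squares and watch the test error as width varies.
import numpy as np

rng = np.random.default_rng(0)
d, n_train, n_test = 20, 100, 500

def make_data(n):
    # Synthetic data from a simple smooth teacher with a little label noise.
    X = rng.standard_normal((n, d)) / np.sqrt(d)
    y = np.tanh(X @ np.ones(d)) + 0.1 * rng.standard_normal(n)
    return X, y

X_tr, y_tr = make_data(n_train)
X_te, y_te = make_data(n_test)

def random_feature_test_mse(width):
    # Random, fixed first layer; ReLU features scaled by 1/sqrt(width) so the
    # feature kernel has an infinite-width (NNGP-like) limit. The output layer
    # is fit with the pseudo-inverse, i.e. the minimum-norm least-squares
    # solution in both the under- and overparameterised regimes.
    W = rng.standard_normal((d, width))
    phi_tr = np.maximum(X_tr @ W, 0.0) / np.sqrt(width)
    phi_te = np.maximum(X_te @ W, 0.0) / np.sqrt(width)
    w = np.linalg.pinv(phi_tr) @ y_tr
    return np.mean((phi_te @ w - y_te) ** 2)

# The test error typically peaks near width ~ n_train (the interpolation
# threshold) and descends again for larger widths.
for width in [10, 50, 90, 100, 110, 200, 1000, 5000]:
    mse = np.mean([random_feature_test_mse(width) for _ in range(10)])
    print(f"width={width:5d}  avg test MSE={mse:.4f}")

In this sketch the predictor at large width approaches a fixed kernel limit, which is the same mechanism the abstract points to: convergence of the overparameterised network to its limiting Gaussian process, analysed in the paper with NNGP and random matrix theory.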
