
arXiv:1911.04301
Rethinking Generalisation

11 November 2019
Antonia Marcu
Adam Prugel-Bennett
Abstract

In this paper, a new approach to computing the generalisation performance is presented that assumes the distribution of risks, ρ(r), for a learning scenario is known. From this, the expected error of a learning machine using empirical risk minimisation is computed for both classification and regression problems. A critical quantity in determining the generalisation performance is the power-law behaviour of ρ(r) around its minimum value, a quantity we call attunement. The distribution ρ(r) is computed for the case of all Boolean functions and for the perceptron used in two different problem settings. Initially a simplified analysis is presented in which an independence assumption about the losses is made. A more accurate analysis is then carried out, taking into account chance correlations in the training set; this leads to corrections to the typical behaviour that is observed.
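As an illustrative sketch (not taken from the paper), the role of the power-law behaviour of ρ(r) near its minimum can be seen from the order statistics of independently drawn risks, mirroring the simplified independence assumption the abstract mentions. If ρ(r) ∝ (r − r_min)^(k−1) near r_min, then the smallest of n independent draws sits at an expected excess risk that shrinks like n^(−1/k). The Monte Carlo sketch below uses a Beta(k, 1) density on [0, 1] (so r_min = 0 and the exponent k is a hypothetical stand-in for the paper's attunement quantity):

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_min_risk(k, n, trials=20000):
    """Monte Carlo estimate of E[min of n risks] when the risk density is
    rho(r) = k * r**(k-1) on [0, 1], i.e. Beta(k, 1), with minimum risk 0.
    Draws `trials` independent batches of n risks and averages the minima."""
    risks = rng.beta(k, 1.0, size=(trials, n))
    return risks.min(axis=1).mean()

# For k = 2 the expected minimum should scale roughly like n**(-1/2),
# so increasing n by 10x should shrink it by about sqrt(10) ~ 3.16x.
k = 2.0
for n in (10, 100, 1000):
    print(n, expected_min_risk(k, n))
```

A flatter density at the minimum (larger k) slows this decay, which is the intuition for why the exponent of ρ(r) at its minimum controls generalisation under empirical risk minimisation in this simplified, independent-losses picture.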
