Finite sample improvement of Akaike's Information Criterion

6 March 2018
Adrien Saumard
F. Navarro
Abstract

We emphasize that it is possible to improve the principle of unbiased risk estimation for model selection by addressing excess risk deviations in the design of penalization procedures. Indeed, we propose a modification of Akaike's Information Criterion that avoids overfitting, even when the sample size is small. We call this correction an over-penalization procedure. As a proof of concept, we show the nonasymptotic optimality of our histogram selection procedure in density estimation by establishing sharp oracle inequalities for the Kullback-Leibler divergence. One of the main features of our theoretical results is that they cover the estimation of unbounded log-densities. To do so, we prove several analytical and probabilistic lemmas that are of independent interest. In an experimental study, we also demonstrate state-of-the-art performance of our over-penalization criterion for bin size selection, in particular outperforming the AICc procedure.
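Classical AIC selects the model minimizing -2 log L + 2D, where L is the maximized likelihood and D the number of parameters; over-penalization inflates the penalty term to account for the excess-risk deviations that make plain AIC overfit at small sample sizes. The Python sketch below illustrates the idea on histogram bin selection for data supported on [0, 1]. It is a minimal illustration only: the multiplicative extra_pen inflation is an assumption for demonstration purposes and does not reproduce the paper's exact over-penalization term, and penalty = D follows the common convention of counting one parameter per bin.

```python
import numpy as np

def select_bins(data, max_bins=50, extra_pen=0.0):
    """Choose the number of histogram bins on [0, 1] by minimizing
    -log-likelihood + D * (1 + extra_pen).

    extra_pen = 0.0 recovers classical AIC (up to the usual factor 2);
    extra_pen > 0 is an illustrative over-penalization meant to curb
    overfitting at small sample sizes (the paper's exact term differs).
    """
    n = len(data)
    best_D, best_crit = 1, np.inf
    for D in range(1, max_bins + 1):
        counts, edges = np.histogram(data, bins=D, range=(0.0, 1.0))
        widths = np.diff(edges)
        nz = counts > 0  # empty bins contribute nothing to the log-likelihood
        # Histogram MLE density on bin j is counts[j] / (n * widths[j]).
        loglik = np.sum(counts[nz] * np.log(counts[nz] / (n * widths[nz])))
        crit = -loglik + D * (1.0 + extra_pen)
        if crit < best_crit:
            best_D, best_crit = D, crit
    return best_D

rng = np.random.default_rng(0)
x = rng.beta(2.0, 5.0, size=60)  # small sample, where plain AIC tends to overfit
print("AIC bins:           ", select_bins(x))
print("over-penalized bins:", select_bins(x, extra_pen=np.sqrt(np.log(60) / 60)))
```

On a run like this, the over-penalized criterion selects the same number of bins or fewer than plain AIC, which is the qualitative behavior the abstract describes for small samples.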
