Empirical estimation of entropy functionals with confidence

19 December 2010
K. Sricharan
Raviv Raich
Alfred Hero
Abstract

This paper introduces a class of k-nearest neighbor (k-NN) estimators called bipartite plug-in (BPI) estimators for estimating integrals of non-linear functions of a probability density, such as Shannon entropy and Rényi entropy. The density is assumed to be smooth, to have bounded support, and to be uniformly bounded from below on that support. Unlike previous k-NN estimators of non-linear density functionals, the proposed estimator uses data splitting and boundary correction to achieve lower mean square error. Specifically, we assume that the T i.i.d. samples X_i ∈ ℝ^d from the density are split into two pieces of cardinality M and N respectively, with the M samples used to compute a k-nearest-neighbor density estimate and the remaining N samples used for empirical estimation of the integral of the density functional. By studying the statistical properties of k-NN balls, explicit rates for the bias and variance of the BPI estimator are derived in terms of the sample size, the dimension of the samples, and the underlying probability distribution. Based on these results, it is possible to specify the optimal choice of the tuning parameters M/T and k that maximizes the rate of decrease of the mean square error (MSE). The resulting optimized BPI estimator converges faster and achieves lower mean squared error than previous k-NN entropy estimators. In addition, a central limit theorem is established for the BPI estimator that allows tight asymptotic confidence intervals to be specified.
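
The data-splitting construction described in the abstract can be illustrated with a short sketch. The Python code below shows the idea for the Shannon-entropy case, where the functional is G(f) = E[-log f(X)]: the M samples build a k-NN density estimate and the remaining N samples average -log of that estimate. The function name, the 50/50 split, the default choice of k, and the omission of the paper's boundary correction are assumptions made for illustration, not the authors' implementation.

import numpy as np
from scipy.special import gamma
from scipy.spatial import cKDTree


def bpi_shannon_entropy(samples, k=5, split_frac=0.5, rng=None):
    """Bipartite plug-in (BPI) style estimate of Shannon entropy H = -E[log f(X)].

    The T samples are split into M points used to build a k-NN density
    estimate and N points used to average -log(density).  This sketch
    omits the boundary correction analyzed in the paper and assumes k >= 2.
    """
    rng = np.random.default_rng(rng)
    X = np.asarray(samples, dtype=float)
    T, d = X.shape

    # Random split into the density-estimation set (size M) and evaluation set (size N).
    perm = rng.permutation(T)
    M = int(split_frac * T)
    X_density, X_eval = X[perm[:M]], X[perm[M:]]

    # Distance from each evaluation point to its k-th nearest neighbor
    # among the M density samples.
    tree = cKDTree(X_density)
    r_k = tree.query(X_eval, k=k)[0][:, -1]

    # Volume of the d-dimensional unit ball.
    c_d = np.pi ** (d / 2) / gamma(d / 2 + 1)

    # k-NN density estimate: f_hat(x) = k / (M * c_d * r_k(x)^d).
    f_hat = k / (M * c_d * r_k ** d)

    # Empirical estimate of the functional G(f) = E[g(f(X))] with g(u) = -log(u).
    return float(np.mean(-np.log(f_hat)))


if __name__ == "__main__":
    # Sanity check on the uniform density over [0, 1]^2, whose entropy is 0.
    data = np.random.default_rng(0).uniform(size=(4000, 2))
    print(bpi_shannon_entropy(data, k=10))
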
