
Non-asymptotic Theory for the Plug-in Rule in Functional Estimation

26 June 2014
Jiantao Jiao
K. Venkat
Yanjun Han
Tsachy Weissman
arXiv:1406.6959
Abstract

The plug-in rule is widely used in estimating functionals of finite-dimensional parameters: one plugs an asymptotically efficient estimator of the parameter into the functional to obtain an asymptotically efficient estimator of the functional. We propose a general non-asymptotic theory for analyzing the performance of the plug-in rule, and demonstrate its utility by applying it to the estimation of functionals of discrete distributions via the maximum likelihood estimator (MLE). We show that existing theory is insufficient for analyzing the bias of the plug-in rule, and propose to apply the theory of approximation using positive linear operators to study this bias. The variance is controlled using well-known tools from the literature on concentration inequalities. Our techniques completely characterize, up to a constant, the maximum $L_2$ risk incurred by the MLE in estimating the Shannon entropy $H(P) = \sum_{i=1}^S -p_i \ln p_i$ and $F_\alpha(P) = \sum_{i=1}^S p_i^\alpha$. As corollaries, for Shannon entropy estimation we show that $n = \omega(S)$ observations are necessary and sufficient for the MLE to be consistent, where $S$ denotes the alphabet size. Similarly, $n = \omega(S^{1/\alpha})$ samples are necessary and sufficient for the MLE to consistently estimate $F_\alpha(P)$, $0 < \alpha < 1$. The minimax sample complexities for the two problems are $\omega(S/\ln S)$ and $\omega(S^{1/\alpha}/\ln S)$, respectively, which implies that the MLE is strictly sub-optimal. When $1 < \alpha < 3/2$, we show that the maximum $L_2$ rate of convergence for the MLE is $n^{-2(\alpha-1)}$ for infinite alphabet size, while the minimax $L_2$ rate is $(n \ln n)^{-2(\alpha-1)}$. When $\alpha \geq 3/2$, the MLE achieves the minimax-optimal $L_2$ convergence rate $n^{-1}$ regardless of the alphabet size.
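To make the plug-in rule concrete, here is a minimal sketch of the MLE plug-in estimators for $H(P)$ and $F_\alpha(P)$. It is illustrative only, not taken from the paper; the function names and the NumPy-based implementation are assumptions. The empirical distribution $\hat{P}$ (the MLE under the multinomial model) is plugged directly into each functional.

    import numpy as np

    def plugin_entropy(samples, S):
        # Empirical distribution p_hat is the MLE under the multinomial model.
        n = len(samples)
        p_hat = np.bincount(samples, minlength=S) / n
        p_hat = p_hat[p_hat > 0]          # convention: 0 * ln 0 = 0
        return float(-np.sum(p_hat * np.log(p_hat)))

    def plugin_F_alpha(samples, S, alpha):
        # Plug the same empirical distribution into F_alpha.
        p_hat = np.bincount(samples, minlength=S) / len(samples)
        return float(np.sum(p_hat ** alpha))

    # Toy check on the uniform distribution, where H(P) = ln S.
    rng = np.random.default_rng(0)
    S, n = 1000, 5000
    samples = rng.integers(0, S, size=n)
    print(plugin_entropy(samples, S), np.log(S))  # plug-in estimate sits below ln S (downward bias)
    print(plugin_F_alpha(samples, S, alpha=0.5))

The role of positive linear operators in the bias analysis can be seen from a standard identity (stated here as background, not quoted from the abstract): under i.i.d. sampling, $n\hat{p}_i \sim \mathrm{Binomial}(n, p_i)$, so the expected plug-in entropy is exactly a Bernstein polynomial approximation of $\phi(x) = -x \ln x$:

$\mathbb{E}\,\hat{H} - H(P) = \sum_{i=1}^S \big( B_n\phi(p_i) - \phi(p_i) \big), \qquad B_n f(p) = \sum_{k=0}^n f(k/n) \binom{n}{k} p^k (1-p)^{n-k}.$

Bounding the bias of the MLE therefore reduces to bounding the approximation error of the Bernstein operator $B_n$, which is a positive linear operator.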
