
Non-Asymptotic Theory for the Plug-in Rule in Functional Estimation

Abstract

The plug-in rule is widely used for estimating functionals of finite-dimensional parameters: an asymptotically efficient estimator of the parameter is plugged into the functional to obtain an asymptotically efficient estimator of the functional. We propose a general non-asymptotic theory for analyzing the performance of the plug-in rule, and demonstrate its significance by estimating functionals of discrete distributions using the maximum likelihood estimator (MLE). We show that existing theory is insufficient for analyzing the bias of the plug-in rule, and propose to apply the theory of approximation using positive linear operators to study this bias. The variance is controlled using well-known tools from the literature on concentration inequalities. We highlight our techniques by obtaining tight $L_2$ risk bounds for estimating the Shannon entropy $H(P) = -\sum_{i=1}^S p_i \ln p_i$ and the functional $F_\alpha(P) = \sum_{i=1}^S p_i^\alpha$. For Shannon entropy estimation, we show that $n = \omega(S)$ observations are necessary and sufficient for the MLE to be consistent, where $S$ denotes the alphabet size. Similarly, $n = \omega(S^{1/\alpha})$ samples are necessary and sufficient for the MLE to consistently estimate $F_\alpha(P)$, $0 < \alpha < 1$. For both problems, the MLE achieves the best possible sample complexity up to logarithmic factors, but is strictly sub-optimal. When $\alpha > 1$, we show that the exact $L_2$ rate of convergence for the MLE is $O(\max\{n^{-2(\alpha-1)}, n^{-1}\})$, regardless of the alphabet size.
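To make the plug-in rule concrete, the following is a minimal sketch (not code from the paper) of the MLE-based plug-in estimators for $H(P)$ and $F_\alpha(P)$: the empirical distribution is computed from the observed counts and substituted directly into each functional. The uniform-distribution example at the end is purely illustrative.

```python
import numpy as np


def plugin_entropy(counts):
    """Plug-in (MLE) estimate of H(P) = -sum_i p_i ln p_i.

    The empirical frequencies p_hat_i = counts_i / n are plugged into the
    entropy functional; zero-count symbols contribute nothing.
    """
    n = counts.sum()
    p_hat = counts[counts > 0] / n
    return float(-(p_hat * np.log(p_hat)).sum())


def plugin_F_alpha(counts, alpha):
    """Plug-in (MLE) estimate of F_alpha(P) = sum_i p_i^alpha."""
    n = counts.sum()
    p_hat = counts[counts > 0] / n
    return float((p_hat ** alpha).sum())


# Illustrative example (hypothetical setting): S = 1000 symbols,
# n = 5000 i.i.d. samples from the uniform distribution, whose true
# entropy is ln(S).
rng = np.random.default_rng(0)
S, n = 1000, 5000
samples = rng.integers(0, S, size=n)
counts = np.bincount(samples, minlength=S)
print(plugin_entropy(counts), np.log(S))
print(plugin_F_alpha(counts, alpha=0.5))
```

In this regime (n comparable to S), the plug-in entropy estimate is noticeably biased downward, which is consistent with the abstract's point that the bias, rather than the variance, is the delicate part of the analysis.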
