Non-asymptotic Theory for the Plug-in Rule in Functional Estimation

The plug-in rule is widely used in estimating functionals of finite-dimensional parameters: one plugs an asymptotically efficient estimator of the parameter into the functional to obtain an asymptotically efficient estimator of the functional. We propose a general non-asymptotic theory for analyzing the performance of the plug-in rule, and demonstrate its significance by estimating functionals of discrete distributions using the maximum likelihood estimator (MLE). We show that existing theory is insufficient for analyzing the bias of the plug-in rule, and propose to apply the theory of approximation using positive linear operators to study this bias. The variance is controlled using well-known tools from the literature on concentration inequalities. We highlight our techniques by obtaining tight risk bounds for estimating the Shannon entropy $H(P) = \sum_{i=1}^{S} -p_i \ln p_i$ and the power sum $F_\alpha(P) = \sum_{i=1}^{S} p_i^\alpha$. For Shannon entropy estimation, we show that $n \gg S$ observations are necessary and sufficient for the MLE to be consistent, where $S$ represents the alphabet size. In addition, we show that $n \gg S^{1/\alpha}$ samples are necessary and sufficient for the MLE to consistently estimate $F_\alpha(P)$, $0 < \alpha < 1$. For both these problems, the MLE achieves the best possible sample complexity up to logarithmic factors, but is strictly sub-optimal. When $\alpha \ge 3/2$, we show that the exact rate of convergence of the MLE is $n^{-1}$ regardless of the alphabet size.
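For concreteness, the following is a minimal sketch (not from the paper) of the plug-in rule for the two functionals discussed above: the empirical distribution, which is the MLE of $P$ under multinomial sampling, is plugged directly into the functional. The function names and the uniform-distribution example are illustrative assumptions, not part of the original work.

```python
import numpy as np

def mle_entropy(samples):
    """Plug-in (MLE) estimate of Shannon entropy H(P) = sum_i -p_i * ln(p_i)."""
    samples = np.asarray(samples)
    # Empirical frequencies: the MLE of the underlying distribution P.
    _, counts = np.unique(samples, return_counts=True)
    p_hat = counts / samples.size
    # Plug the empirical distribution into the entropy functional
    # (unseen symbols have zero empirical mass and contribute nothing).
    return float(-np.sum(p_hat * np.log(p_hat)))

def mle_power_sum(samples, alpha):
    """Plug-in (MLE) estimate of the power sum F_alpha(P) = sum_i p_i**alpha."""
    samples = np.asarray(samples)
    _, counts = np.unique(samples, return_counts=True)
    p_hat = counts / samples.size
    return float(np.sum(p_hat ** alpha))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    S = 1000                              # alphabet size
    p = np.full(S, 1.0 / S)               # uniform P, true entropy ln(S)
    x = rng.choice(S, size=5 * S, p=p)    # n >> S samples, the regime where the MLE is consistent
    print("true entropy   :", np.log(S))
    print("plug-in entropy:", mle_entropy(x))
    print("plug-in F_0.5  :", mle_power_sum(x, alpha=0.5))
```

In the regime $n \lesssim S$ the plug-in entropy estimate above is noticeably biased downward, which is the bias phenomenon the paper analyzes via positive linear operators.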