Non-asymptotic Theory for the Plug-in Rule in Functional Estimation

The plug-in rule is widely used in estimating functionals of finite-dimensional parameters, and involves plugging in an asymptotically efficient estimator of the parameter to obtain an asymptotically efficient estimator of the functional. We propose a general non-asymptotic theory for analyzing the performance of the plug-in rule, and demonstrate its utility by applying it to the estimation of functionals of discrete distributions via the maximum likelihood estimator (MLE). We show that existing theory is insufficient for analyzing the bias of the plug-in rule, and propose applying the theory of approximation using positive linear operators to study this bias. The variance is controlled using well-known tools from the literature on concentration inequalities. Our techniques completely characterize, up to a multiplicative constant, the maximum $L_2$ risk incurred by the MLE in estimating the Shannon entropy $H(P) = \sum_{i=1}^{S} -p_i \ln p_i$ and the power sum $F_\alpha(P) = \sum_{i=1}^{S} p_i^\alpha$. As corollaries, for Shannon entropy estimation, we show that it is necessary and sufficient to have $n \gg S$ observations for the MLE to be consistent, where $S$ represents the alphabet size. In addition, we obtain that it is necessary and sufficient to consider $n \gg S^{1/\alpha}$ samples for the MLE to consistently estimate $F_\alpha(P)$, $0 < \alpha < 1$. The minimax sample complexities for these two problems are $n \gg S/\ln S$ and $n \gg S^{1/\alpha}/\ln S$, respectively, which implies that the MLE is strictly sub-optimal. When $1 < \alpha < 3/2$, we show that the maximum $L_2$ rate of convergence for the MLE is $n^{-2(\alpha-1)}$ for infinite alphabet size, while the minimax $L_2$ rate is $(n \ln n)^{-2(\alpha-1)}$. When $\alpha \ge 3/2$, the MLE achieves the minimax optimal $L_2$ convergence rate $n^{-1}$ regardless of the alphabet size.
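
As a concrete illustration of the plug-in rule described above, the following minimal Python sketch plugs the empirical distribution (the MLE of $P$) into the functionals $H(P)$ and $F_\alpha(P)$. The function names and the uniform-distribution demo are illustrative assumptions for this sketch, not code or experiments from the paper, and no bias correction is applied.

```python
import numpy as np

def empirical_distribution(samples, alphabet_size):
    """MLE of the distribution: empirical frequencies over a finite alphabet {0, ..., S-1}."""
    counts = np.bincount(samples, minlength=alphabet_size)
    return counts / counts.sum()

def plugin_entropy(samples, alphabet_size):
    """Plug-in (MLE) estimate of the Shannon entropy H(P) = -sum_i p_i ln p_i."""
    p_hat = empirical_distribution(samples, alphabet_size)
    nz = p_hat[p_hat > 0]                 # convention: 0 * ln 0 = 0
    return float(-np.sum(nz * np.log(nz)))

def plugin_power_sum(samples, alphabet_size, alpha):
    """Plug-in (MLE) estimate of F_alpha(P) = sum_i p_i^alpha."""
    p_hat = empirical_distribution(samples, alphabet_size)
    return float(np.sum(p_hat ** alpha))

# Demo on a uniform distribution over S symbols: when n is not much larger
# than S, the plug-in entropy estimate is noticeably biased below ln S.
rng = np.random.default_rng(0)
S, n = 1000, 2000
samples = rng.integers(0, S, size=n)
print(plugin_entropy(samples, S), np.log(S))   # estimate vs. true entropy ln S
print(plugin_power_sum(samples, S, alpha=0.5))
```

The downward bias visible in this demo is exactly the quantity the paper analyzes via positive linear operators; the variance of the same estimator is what the concentration-inequality tools control.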