Maximum Likelihood Estimation of Functionals of Discrete Distributions

We propose a general framework for analyzing the performance of the MLE (Maximum Likelihood Estimator) in estimating functionals of discrete distributions, under the worst-case mean squared error criterion. We show that existing theory, which was developed to accommodate a fixed alphabet and a growing number of observations, is insufficient for analyzing the bias of the MLE, and we apply the theory of approximation using positive linear operators to study this bias. The variance is controlled using well-known tools from the literature on concentration inequalities. Our techniques yield a characterization of the worst-case risk incurred by the MLE in estimating the Shannon entropy $H(P) = \sum_{i=1}^{S} -p_i \ln p_i$, and $F_\alpha(P) = \sum_{i=1}^{S} p_i^\alpha$, $\alpha > 0$, up to a multiplicative constant, for any alphabet size $S$ and sample size $n$. We show that it is necessary and sufficient to have $n \gg S$ observations for the MLE to be consistent in Shannon entropy estimation. The MLE requires $n \gg S^{1/\alpha}$ samples to consistently estimate $F_\alpha(P)$ when $0 < \alpha < 1$. The minimax rate-optimal estimators for these two problems require $n \gg S/\ln S$ and $n \gg S^{1/\alpha}/\ln S$ samples, respectively, which implies that the MLE has a strictly sub-optimal sample complexity. When $1 < \alpha < 3/2$, we show that the worst-case rate of convergence for the MLE is $n^{-2(\alpha-1)}$ for infinite alphabet size, while the minimax rate is $(n \ln n)^{-2(\alpha-1)}$. When $\alpha \geq 3/2$, the MLE achieves the minimax-optimal convergence rate $n^{-1}$ regardless of the alphabet size. We explicitly establish an equivalence between the bias analysis of plug-in estimators for general functionals under arbitrary statistical models and the theory of approximation using positive linear operators. This equivalence is of relevance and consequence far beyond the specific problem setting considered in this paper.
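For concreteness, the MLE studied here is the plug-in estimator: substitute the empirical distribution into $H(P)$ or $F_\alpha(P)$. The sketch below is a minimal NumPy-based illustration of that plug-in rule, not the authors' code; the function names and the uniform-distribution example are assumptions for demonstration only.

```python
import numpy as np

def empirical_distribution(samples, alphabet_size):
    """Empirical (maximum likelihood) estimate of the distribution P."""
    counts = np.bincount(samples, minlength=alphabet_size)
    return counts / counts.sum()

def entropy_mle(samples, alphabet_size):
    """Plug-in (MLE) estimate of H(P) = sum_i -p_i ln p_i."""
    p_hat = empirical_distribution(samples, alphabet_size)
    nz = p_hat[p_hat > 0]  # convention: 0 * ln 0 = 0
    return float(-np.sum(nz * np.log(nz)))

def power_sum_mle(samples, alphabet_size, alpha):
    """Plug-in (MLE) estimate of F_alpha(P) = sum_i p_i^alpha."""
    p_hat = empirical_distribution(samples, alphabet_size)
    nz = p_hat[p_hat > 0]
    return float(np.sum(nz ** alpha))

# Illustrative example (hypothetical numbers): uniform P over S symbols,
# observed n < S times -- the regime where the plug-in entropy estimate
# is heavily biased, consistent with the n >> S requirement above.
rng = np.random.default_rng(0)
S, n = 1000, 500
samples = rng.integers(0, S, size=n)
print(entropy_mle(samples, S), np.log(S))   # plug-in estimate vs. true entropy ln(S)
print(power_sum_mle(samples, S, alpha=0.5))
```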