Maximum Likelihood Estimation of Functionals of Discrete Distributions

Abstract

We propose a general framework for analyzing the performance of the MLE (Maximum Likelihood Estimator) in estimating functionals of discrete distributions, under the worst-case mean squared error criterion. We show that existing theory, which was developed to accommodate a fixed alphabet and a growing number of observations, is insufficient for analyzing the bias of the MLE, and apply the theory of approximation using positive linear operators to study this bias. The variance is controlled using well-known tools from the literature on concentration inequalities. Our techniques yield a characterization of the worst-case $L_2$ risk incurred by the MLE in estimating the Shannon entropy $H(P) = \sum_{i=1}^{S} -p_i \ln p_i$ and $F_\alpha(P) = \sum_{i=1}^{S} p_i^\alpha$, $\alpha > 0$, up to a multiplicative constant, for any alphabet size $S \leq \infty$ and sample size $n$. We show that it is necessary and sufficient to have $n \gg S$ observations for the MLE to be consistent in Shannon entropy estimation. The MLE requires $n \gg S^{1/\alpha}$ samples to consistently estimate $F_\alpha(P)$, $0 < \alpha < 1$. The minimax rate-optimal estimators for both problems require $S/\ln S$ and $S^{1/\alpha}/\ln S$ samples, respectively, which implies that the MLE has strictly sub-optimal sample complexity. When $1 < \alpha < 3/2$, we show that the worst-case $L_2$ rate of convergence for the MLE is $n^{-2(\alpha-1)}$ for infinite alphabet size, while the minimax $L_2$ rate is $(n \ln n)^{-2(\alpha-1)}$. When $\alpha \geq 3/2$, the MLE achieves the optimal $L_2$ convergence rate $n^{-1}$ regardless of the alphabet size. We explicitly establish an equivalence between the bias analysis of plug-in estimators for general functionals under arbitrary statistical models and the theory of approximation using positive linear operators. This equivalence is of relevance and consequence far beyond the specific problem setting in this paper.
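For concreteness, the plug-in (MLE) approach analyzed in the abstract evaluates the functional at the empirical distribution $\hat{P}$, i.e., it reports $H(\hat{P})$ and $F_\alpha(\hat{P})$. The sketch below is not from the paper; the function names and the uniform-distribution example are illustrative assumptions, meant only to show the estimators and the bias behavior in the regime where $n$ is comparable to $S$.

```python
import numpy as np

# Illustrative sketch (not the paper's code): plug-in (MLE) estimators
# for Shannon entropy H(P) and the power sum F_alpha(P).

def empirical_distribution(samples, alphabet_size):
    """MLE of P: empirical frequencies over the alphabet {0, ..., S-1}."""
    counts = np.bincount(samples, minlength=alphabet_size)
    return counts / counts.sum()

def entropy_mle(samples, alphabet_size):
    """Plug-in estimate of H(P) = -sum_i p_i ln p_i (0 ln 0 taken as 0)."""
    p_hat = empirical_distribution(samples, alphabet_size)
    nz = p_hat > 0
    return -np.sum(p_hat[nz] * np.log(p_hat[nz]))

def f_alpha_mle(samples, alphabet_size, alpha):
    """Plug-in estimate of F_alpha(P) = sum_i p_i^alpha, alpha > 0."""
    p_hat = empirical_distribution(samples, alphabet_size)
    return np.sum(p_hat ** alpha)

# Example: uniform distribution on S symbols with n only twice S.
# The plug-in entropy estimate tends to fall below the true value ln(S),
# illustrating why n >> S observations are needed for consistency.
rng = np.random.default_rng(0)
S, n = 1000, 2000
samples = rng.integers(0, S, size=n)
print(entropy_mle(samples, S), np.log(S))
print(f_alpha_mle(samples, S, alpha=0.5))
```

The bias visible in this toy example is exactly the quantity the paper's approximation-theoretic analysis (via positive linear operators) controls; the variance term is handled separately with concentration inequalities.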
