Likelihood estimation of sparse topic distributions in topic models and its applications to Wasserstein document distance calculations

Abstract

This paper studies the estimation of high-dimensional, discrete, possibly sparse mixtures arising in topic models. The data consist of observed multinomial counts of p words across n independent documents. In topic models, the p×n expected word-frequency matrix is assumed to factorize as the product of a p×K word-topic matrix A and a K×n topic-document matrix T. Since the columns of both matrices represent conditional probabilities belonging to probability simplices, the columns of A are viewed as p-dimensional mixture components common to all documents, while the columns of T are viewed as K-dimensional mixture weights that are document-specific and allowed to be sparse. The main interest is in providing sharp, finite-sample, ℓ1-norm convergence rates for estimators of the mixture weights T when A is either known or unknown. For known A, we suggest estimating T by maximum likelihood (MLE). Our non-standard analysis of the MLE not only establishes its ℓ1 convergence rate but also reveals a remarkable property: the MLE, with no extra regularization, can be exactly sparse and contain the true zero pattern of T. We further show that the MLE is both minimax optimal and adaptive to the unknown sparsity in a large class of sparse topic distributions. When A is unknown, we estimate T by optimizing the likelihood function corresponding to a plug-in, generic estimator Â of A. For any estimator Â that satisfies carefully detailed conditions on its proximity to A, the resulting estimator of T is shown to retain the properties established for the MLE. The ambient dimensions K and p are allowed to grow with the sample sizes. Our application is to the estimation of 1-Wasserstein distances between document-generating distributions. We propose, estimate, and analyze new 1-Wasserstein distances between two probabilistic document representations.
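To make the known-A setting concrete, here is a minimal sketch of computing the MLE of a document's topic-weight vector t on the probability simplex, given a known word-topic matrix A and that document's word counts x. The function name and the EM-style multiplicative update are illustrative choices for solving the multinomial likelihood problem, not necessarily the algorithm used in the paper.

```python
import numpy as np

def mle_topic_weights(A, x, n_iter=2000, tol=1e-12):
    """EM-style MLE of the K-dim topic distribution t for one document.

    A : (p, K) word-topic matrix, columns on the probability simplex.
    x : (p,)   observed multinomial word counts for the document.
    Maximizes sum_j x_j * log((A t)_j) subject to t >= 0, sum(t) = 1.
    """
    p, K = A.shape
    N = x.sum()
    t = np.full(K, 1.0 / K)              # start from uniform weights
    for _ in range(n_iter):
        mix = A @ t                       # (p,) model word frequencies
        # Combined E/M step: reweight each topic by its responsibility
        # for the observed counts; the update keeps t on the simplex.
        t_new = t * (A.T @ (x / np.maximum(mix, 1e-300))) / N
        if np.abs(t_new - t).sum() < tol:
            t = t_new
            break
        t = t_new
    return t

# Toy example: K = 2 topics over p = 3 words.
A = np.array([[0.7, 0.1],
              [0.2, 0.2],
              [0.1, 0.7]])
rng = np.random.default_rng(0)
t_true = np.array([0.6, 0.4])
x = rng.multinomial(5000, A @ t_true)    # one document's word counts
t_hat = mle_topic_weights(A, x)
```

Note that the update is multiplicative, so components of t can shrink toward zero without any explicit penalty, which is consistent with the abstract's observation that the unregularized MLE can be exactly sparse.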