
Entropy Rate Estimation for Markov Chains with Large State Space

22 February 2018
Yanjun Han
Jiantao Jiao
Chuan-Zheng Lee
Tsachy Weissman
Yihong Wu
Tiancheng Yu
arXiv:1802.07889
Abstract

Estimating the entropy based on data is one of the prototypical problems in distribution property testing and estimation. For estimating the Shannon entropy of a distribution on $S$ elements with independent samples, [Paninski2004] showed that the sample complexity is sublinear in $S$, and [Valiant--Valiant2011] showed that consistent estimation of Shannon entropy is possible if and only if the sample size $n$ far exceeds $\frac{S}{\log S}$. In this paper we consider the problem of estimating the entropy rate of a stationary reversible Markov chain with $S$ states from a sample path of $n$ observations. We show that: (1) As long as the Markov chain mixes not too slowly, i.e., the relaxation time is at most $O(\frac{S}{\ln^3 S})$, consistent estimation is achievable when $n \gg \frac{S^2}{\log S}$. (2) As long as the Markov chain has some slight dependency, i.e., the relaxation time is at least $1+\Omega(\frac{\ln^2 S}{\sqrt{S}})$, consistent estimation is impossible when $n \lesssim \frac{S^2}{\log S}$. Under both assumptions, the optimal estimation accuracy is shown to be $\Theta(\frac{S^2}{n \log S})$. In comparison, the empirical entropy rate requires at least $\Omega(S^2)$ samples to be consistent, even when the Markov chain is memoryless. In addition to synthetic experiments, we also apply the estimators that achieve the optimal sample complexity to estimate the entropy rate of the English language in the Penn Treebank and the Google One Billion Words corpora, which provides a natural benchmark for language modeling and relates it directly to the widely used perplexity measure.
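
As a concrete reference point, the empirical (plug-in) entropy rate used as a baseline in the abstract can be computed directly from a single sample path by counting bigram transitions. The sketch below is illustrative only, not the paper's proposed estimator: the function name, the toy two-state chain, and the use of NumPy are all assumptions for demonstration. It estimates an empirical transition matrix and weights each row's entropy by the empirical state frequencies.

```python
import numpy as np

def empirical_entropy_rate(path, num_states=None):
    """Plug-in (empirical) entropy-rate estimate from one sample path:
    count bigram transitions, normalize each row to get an empirical
    transition matrix, and weight each row's Shannon entropy by the
    empirical state frequencies. Returns bits per symbol.
    (Illustrative sketch, not the estimator proposed in the paper.)"""
    path = np.asarray(path)
    S = num_states or int(path.max()) + 1

    # Transition counts: counts[i, j] = #{t : X_t = i, X_{t+1} = j}
    counts = np.zeros((S, S))
    for i, j in zip(path[:-1], path[1:]):
        counts[i, j] += 1

    row_totals = counts.sum(axis=1)
    pi_hat = row_totals / row_totals.sum()  # empirical state frequencies

    entropy_rate = 0.0
    for i in range(S):
        if row_totals[i] == 0:
            continue  # state never left in the sample path
        p_row = counts[i] / row_totals[i]   # empirical P(. | state i)
        nz = p_row > 0
        entropy_rate += pi_hat[i] * -(p_row[nz] * np.log2(p_row[nz])).sum()
    return entropy_rate

# Toy example: a two-state chain whose true entropy rate is about 0.57 bits.
rng = np.random.default_rng(0)
P = np.array([[0.9, 0.1], [0.4, 0.6]])
x = [0]
for _ in range(100_000):
    x.append(rng.choice(2, p=P[x[-1]]))
print(empirical_entropy_rate(x))
```

In bits per symbol, an entropy-rate estimate $\hat H$ corresponds to a perplexity of $2^{\hat H}$, which is why estimating the entropy rate of a text corpus doubles as a language-modeling benchmark. As the abstract notes, this plug-in estimator needs at least $\Omega(S^2)$ samples to be consistent, whereas the estimators studied in the paper need only about $\frac{S^2}{\log S}$.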
