Statistically efficient thinning of a Markov chain sampler

It is common to subsample Markov chain output to reduce the storage burden. Geyer (1992) shows that discarding $k-1$ out of every $k$ observations will not improve statistical efficiency, as quantified through variance in a given computational budget. That observation is often taken to mean that thinning MCMC output cannot improve statistical efficiency. Here we suppose that it costs one unit of time to advance a Markov chain and then $\theta>0$ units of time to compute a sampled quantity of interest. For a thinned process, that cost $\theta$ is incurred less often, so it can be advanced through more stages. Here we provide examples to show that thinning will improve statistical efficiency if $\theta$ is large and the sample autocorrelations decay slowly enough. If the lag $\ell$ autocorrelations of a scalar measurement satisfy $\rho_\ell > \rho_{\ell+1} > 0$, then there is always a $\theta < \infty$ at which thinning becomes more efficient for averages of that scalar. Many sample autocorrelation functions resemble those of first order AR(1) processes with $\rho_\ell = \rho^{|\ell|}$ for some $-1 < \rho < 1$. For an AR(1) process it is possible to compute the most efficient subsampling frequency $k$. The optimal $k$ grows rapidly as $\rho$ increases towards $1$. The resulting efficiency gain depends primarily on $\theta$, not $\rho$. Taking $k=1$ (no thinning) is optimal when $\rho \le 0$. For $\rho > 0$ it is optimal if and only if $\theta \le (1-\rho)^2/(2\rho)$. This efficiency gain never exceeds $1+\theta$. This paper also gives efficiency bounds for autocorrelations bounded between those of two AR(1) processes.
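The AR(1) trade-off described above can be sketched numerically. This is a minimal illustration, not code from the paper: it assumes the standard AR(1) asymptotic variance factor $(1+\rho^k)/(1-\rho^k)$ for the mean of a chain thinned by $k$ (which is again AR(1) with parameter $\rho^k$), and the abstract's cost model of $k + \theta$ time units per retained sample; the function names are illustrative.

```python
def efficiency(k, rho, theta):
    """Relative statistical efficiency of keeping every k-th observation of an
    AR(1) chain with lag-1 autocorrelation rho, when advancing the chain costs
    one time unit per step and each retained sample costs theta extra units.

    Thinning by k yields an AR(1) chain with parameter rho**k, whose sample
    mean has asymptotic variance proportional to (1 + rho**k) / (1 - rho**k);
    each retained sample costs k + theta time units, so efficiency per unit
    time is proportional to the reciprocal of the product.
    """
    r = rho ** k
    return (1 - r) / ((k + theta) * (1 + r))

def optimal_k(rho, theta, k_max=10_000):
    """Integer thinning frequency maximizing efficiency, by search up to k_max."""
    return max(range(1, k_max + 1), key=lambda k: efficiency(k, rho, theta))
```

For example, with $\rho = 0.5$ the boundary $(1-\rho)^2/(2\rho) = 0.25$, so `optimal_k(0.5, 0.2)` returns 1 (no thinning) while `optimal_k(0.5, 0.3)` returns 2; and the realized gain `efficiency(optimal_k(rho, theta), rho, theta) / efficiency(1, rho, theta)` stays below $1+\theta$.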