On the convergence complexity of Gibbs samplers for a family of simple Bayesian random effects models

The emergence of big data has led to so-called convergence complexity analysis, which is the study of how Markov chain Monte Carlo (MCMC) algorithms behave as the sample size, n, and/or the number of parameters, p, in the underlying data set increase. This type of analysis is often quite challenging, in part because existing results for fixed n and p are simply not sharp enough to yield good asymptotic results. One of the first convergence complexity results for an MCMC algorithm on a continuous state space is due to Yang and Rosenthal (2019), who established a mixing time result for a Gibbs sampler (for a simple Bayesian random effects model) that was introduced and studied by Rosenthal (1996). The asymptotic behavior of the spectral gap of this Gibbs sampler is, however, still unknown. We use a recently developed simulation technique (Qin et al., 2019) to provide substantial numerical evidence that the gap is bounded away from 0 as n → ∞. We also establish a pair of rigorous convergence complexity results for two different Gibbs samplers associated with a generalization of the random effects model considered by Rosenthal (1996). Our results show that, under strong regularity conditions, the spectral gaps of these Gibbs samplers converge to 1 as the sample size increases.
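To make the setting concrete, below is a minimal sketch of the kind of two-block Gibbs sampler at issue, written for a simple random effects model in the spirit of Rosenthal (1996): K observations y_i with y_i | theta_i ~ N(theta_i, V), theta_i | mu ~ N(mu, A), known variances V and A, and a flat prior on mu. The notation (K, V, A), the initialization, and the specific parameterization are assumptions made here for illustration and may differ from the paper's.

    import numpy as np

    def gibbs_sampler(y, V, A, n_iter=10_000, rng=None):
        """Two-block Gibbs sampler for the simple random effects model
        (a sketch, assuming known variances V and A and a flat prior on mu):

            y_i | theta_i ~ N(theta_i, V),   theta_i | mu ~ N(mu, A).

        Alternates draws of mu | theta and theta | mu, y.
        """
        rng = np.random.default_rng(rng)
        K = len(y)
        theta = y.copy()              # initialize the random effects at the data
        mu_draws = np.empty(n_iter)
        for t in range(n_iter):
            # mu | theta ~ N(mean(theta), A / K)
            mu = rng.normal(theta.mean(), np.sqrt(A / K))
            # theta_i | mu, y_i ~ N((V*mu + A*y_i)/(V + A), A*V/(A + V))
            post_mean = (V * mu + A * y) / (V + A)
            post_sd = np.sqrt(A * V / (A + V))
            theta = rng.normal(post_mean, post_sd)
            mu_draws[t] = mu
        return mu_draws

    # Example: simulate data under the (assumed) model and run the sampler
    rng = np.random.default_rng(0)
    K, V, A = 50, 1.0, 2.0
    y = rng.normal(0.0, np.sqrt(V + A), size=K)
    draws = gibbs_sampler(y, V, A, n_iter=5_000, rng=1)
    print(draws.mean(), draws.std())

Convergence complexity analysis asks how quickly chains of this type mix as K (the sample size and, here, also the number of random effects) grows, which is what the spectral gap and mixing time results summarized above address.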