Weak Convergence of Markov Chain Monte Carlo Methods and its Application to Regular Gibbs Sampler

In this paper, we introduce the notion of efficiency (consistency) and examine some asymptotic properties of Markov chain Monte Carlo methods. We apply these results to the Gibbs sampler for independent and identically distributed observations. More precisely, we show that if both the sample size and the running time of the Gibbs sampler tend to infinity, and if the initial guess is not far from the true parameter, then the Gibbs sampler estimator converges to the Bayesian estimator. This is a local property of the Gibbs sampler, which may, in some cases, be more essential than the global properties for describing its behavior. The advantages of using the local properties are the generality of the underlying model and the existence of a simple equivalent Gibbs sampler. These yield a simple regularity condition, illustrate the meaning of the burn-in method, and suggest the reason for non-regular behaviors, which provides useful insight into the problem of how to construct efficient algorithms for models such as mixture models.
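To make the statement concrete, here is a minimal, illustrative sketch (not the paper's construction): a Gibbs sampler for i.i.d. Normal observations with conjugate priors, where the estimate of the mean after burn-in can be compared with the analytic Bayesian (posterior mean) estimator. The model, prior hyperparameters, and iteration counts are assumptions chosen only for demonstration.

```python
# Hypothetical example: Gibbs sampler for i.i.d. Normal(mu, sigma^2) data with
# conjugate Normal prior on mu and Gamma prior on the precision tau = 1/sigma^2.
# After burn-in, the average of the mu draws approximates the Bayesian estimator.
import numpy as np

rng = np.random.default_rng(0)

# Simulated i.i.d. observations (assumed true values, for illustration only)
n, true_mu, true_sigma = 500, 2.0, 1.5
y = rng.normal(true_mu, true_sigma, size=n)

# Priors: mu ~ N(m0, v0), tau ~ Gamma(a0, b0)  (assumed hyperparameters)
m0, v0, a0, b0 = 0.0, 100.0, 1.0, 1.0

iters, burn_in = 5000, 1000
mu, tau = y.mean(), 1.0 / y.var()   # initial guess not far from the truth
mu_draws = np.empty(iters)

for t in range(iters):
    # Draw mu | tau, y from its Normal full conditional (conjugate update)
    v_post = 1.0 / (1.0 / v0 + n * tau)
    m_post = v_post * (m0 / v0 + tau * y.sum())
    mu = rng.normal(m_post, np.sqrt(v_post))

    # Draw tau | mu, y from its Gamma full conditional (conjugate update)
    a_post = a0 + n / 2.0
    b_post = b0 + 0.5 * np.sum((y - mu) ** 2)
    tau = rng.gamma(a_post, 1.0 / b_post)  # numpy uses shape/scale

    mu_draws[t] = mu

# Gibbs sampler estimator of mu: posterior mean approximated from post-burn-in draws
print("Gibbs estimate of mu after burn-in:", mu_draws[burn_in:].mean())
```

Discarding the first `burn_in` draws reflects the burn-in method discussed in the abstract: the retained draws are treated as approximately stationary, so their average serves as the Gibbs sampler estimator that the paper compares with the Bayesian estimator.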