An improvement of the adaptive rejection Metropolis sampling algorithm

A suitable choice of the proposal density in a Markov chain Monte Carlo algorithm, for example, the Metropolis-Hastings (MH) algorithm, is a crucial factor in the convergence of the chain. Adaptive rejection Metropolis sampling (ARMS) is a well-known MH scheme with adaptive proposals used to generate samples from one-dimensional target densities. It is usually applied within a Gibbs sampler to draw efficiently from the full conditional distributions. In this work, we pinpoint a drawback in the adaptive procedure of ARMS and propose two improved adaptive schemes. The first one satisfies the diminishing adaptation condition needed to ensure the convergence of the Markov chain. The second one is an adaptive independent MH algorithm that learns from all previous samples except the current state of the chain, so that convergence to the invariant density is guaranteed. The new schemes improve the adaptive strategy of ARMS and, as a consequence, the construction of the sequence of proposals can also be simplified. Numerical results show that the new techniques outperform the standard ARMS structure.
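To make the second scheme concrete, below is a minimal toy sketch of an adaptive independent MH sampler in one dimension. It refits a Gaussian proposal from all previous chain states excluding the current one, which is the property the abstract highlights; the Gaussian family, the initial parameters, and the adaptation schedule are illustrative assumptions, not the actual ARMS proposal construction.

```python
import math
import random

def adaptive_independent_mh(log_target, n_iters, x0=0.0, seed=0):
    """Toy 1-D adaptive independent Metropolis-Hastings sampler.

    The Gaussian proposal is refitted from all previous chain states
    excluding the current one, in the spirit of the second scheme the
    abstract describes; this is an illustrative sketch, not ARMS itself.
    """
    rng = random.Random(seed)
    x = x0
    mu, sigma = 0.0, 2.0           # initial proposal parameters (assumed)
    samples = []
    s1 = s2 = 0.0                  # running sums of all stored states
    for _ in range(n_iters):
        n_past = len(samples) - 1  # number of states excluding current
        if n_past >= 20:
            # moments of past states, with the current state x removed
            m = (s1 - x) / n_past
            v = (s2 - x * x) / n_past - m * m
            # a floor on sigma avoids premature collapse of the proposal
            mu, sigma = m, max(math.sqrt(max(v, 1e-12)), 0.25)
        y = rng.gauss(mu, sigma)
        # independence proposal: q does not depend on the current state x
        log_q = lambda z: -0.5 * ((z - mu) / sigma) ** 2
        log_alpha = log_target(y) - log_target(x) + log_q(x) - log_q(y)
        if rng.random() < math.exp(min(0.0, log_alpha)):
            x = y
        samples.append(x)
        s1 += x
        s2 += x * x
    return samples
```

Because the proposal ignores the current state, excluding that state from the adaptation set is what keeps the resulting chain convergent to the target; adapting on the current state as well is the kind of pitfall the paper's improved schemes are designed to avoid.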