On the Stability of Sequential Monte Carlo Methods in High Dimensions

Abstract

We investigate the stability of a Sequential Monte Carlo (SMC) method applied to the problem of sampling from a target distribution on $\mathbb{R}^d$ for large $d$. It is well known that using a single importance sampling step, one produces an approximation for the target that deteriorates as the dimension $d$ increases, unless the number of Monte Carlo samples $N$ increases at an exponential rate in $d$. We show that this degeneracy can be avoided by introducing a sequence of artificial targets, starting from a `simple' density and moving to the one of interest, using an SMC method to sample from the sequence. Using this class of SMC methods with a fixed number of samples, one can produce an approximation for which the effective sample size (ESS) converges to a random variable $\varepsilon_N$ as $d \rightarrow \infty$, with $1 < \varepsilon_N < N$. The convergence is achieved with a computational cost proportional to $N d^2$. If $\varepsilon_N \ll N$, we can raise its value by introducing a number of resampling steps, say $m$ (where $m$ is independent of $d$). In this case, the ESS converges to a random variable $\varepsilon_{N,m}$ as $d \rightarrow \infty$ and $\lim_{m\to\infty} \varepsilon_{N,m} = N$. Also, we show that the Monte Carlo error for estimating a fixed-dimensional marginal expectation is of order $\frac{1}{\sqrt{N}}$ uniformly in $d$. The results imply that, in high dimensions, SMC algorithms can efficiently control the variability of the importance sampling weights and estimate fixed-dimensional marginals at a cost which is less than exponential in $d$, and indicate that, in high dimensions, resampling leads to a reduction in the Monte Carlo error and an increase in the ESS.
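The structure the abstract describes — interpolating from a simple density to the target through a sequence of artificial distributions, reweighting the particle system at each step, and resampling when the ESS degenerates — can be sketched as follows. This is a minimal illustrative implementation, not the paper's exact algorithm: the tempered Gaussian targets, the step sizes, the ESS/2 resampling threshold, and the function names are all assumptions made for the example.

```python
import numpy as np

def ess(logw):
    """Effective sample size 1 / sum(w_i^2) from unnormalised log-weights."""
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)

def smc_tempering(d=50, N=500, m=20, seed=0):
    """SMC sampler through the tempered sequence
    pi_b(x) ∝ pi(x)^b * pi_0(x)^(1-b),  0 = b_0 < ... < b_m = 1,
    with pi = N(0, I_d) as an illustrative target and pi_0 = N(0, sigma0^2 I_d)
    as the broad initial density (both choices are assumptions)."""
    rng = np.random.default_rng(seed)
    sigma0 = 3.0
    x = rng.normal(0.0, sigma0, size=(N, d))
    logw = np.zeros(N)
    betas = np.linspace(0.0, 1.0, m + 1)

    def log_target(z):
        return -0.5 * np.sum(z ** 2, axis=1)

    def log_init(z):
        return -0.5 * np.sum(z ** 2, axis=1) / sigma0 ** 2

    for b0, b1 in zip(betas[:-1], betas[1:]):
        # Incremental importance weight: [pi(x) / pi_0(x)]^(b1 - b0).
        logw += (b1 - b0) * (log_target(x) - log_init(x))
        # Resample (multinomially) when the ESS falls below N/2.
        if ess(logw) < N / 2:
            w = np.exp(logw - logw.max())
            w /= w.sum()
            idx = rng.choice(N, size=N, p=w)
            x, logw = x[idx], np.zeros(N)
        # One random-walk Metropolis move leaving pi_{b1} invariant,
        # to rejuvenate the particle locations after reweighting.
        def log_pi(z):
            return b1 * log_target(z) + (1.0 - b1) * log_init(z)
        prop = x + 0.5 * rng.normal(size=x.shape)
        accept = np.log(rng.uniform(size=N)) < log_pi(prop) - log_pi(x)
        x[accept] = prop[accept]
    return x, logw, ess(logw)
```

With a fixed $N$ and enough intermediate steps $m$, the final ESS stays well above 1 even as $d$ grows, in contrast to a single importance sampling step from $\pi_0$ to $\pi$, whose ESS collapses toward 1 in high dimensions.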
