Particle Gibbs algorithms for Markov jump processes

In the present paper we propose a new MCMC algorithm for sampling from the posterior distribution of the hidden trajectory of a Markov jump process. Our algorithm is based on the idea of exploiting virtual jumps, introduced by Rao and Teh (2013). The main novelty is that our algorithm uses particle Gibbs with ancestor sampling (PGAS) to update the skeleton, while Rao and Teh use forward filtering backward sampling (FFBS). In contrast to previous methods, our algorithm can be implemented even if the state space is infinite. In addition, the cost of a single step of the proposed algorithm does not depend on the size of the state space. The computational cost of our method is of order $O(N\,\mathbb{E}(J))$, where $N$ is the number of particles used in the PGAS algorithm and $\mathbb{E}(J)$ is the expected number of jumps (together with virtual ones). The cost of the algorithm of Rao and Teh is of order $O(|\mathcal{X}|^2\,\mathbb{E}(J))$, where $|\mathcal{X}|$ is the size of the state space. Simulation results show that our algorithm with PGAS converges slightly more slowly than the algorithm with FFBS when the state space is small. However, as the size of the state space grows, the proposed method outperforms existing ones. We give special attention to a hierarchical version of our algorithm which can be applied to continuous time Bayesian networks (CTBNs).
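
To illustrate the kind of skeleton update described above, here is a minimal sketch of particle Gibbs with ancestor sampling applied to the discrete-time chain obtained after uniformization, i.e. on a given grid of actual and virtual jump times. This is not the authors' implementation: the finite state space, the explicit uniformized transition matrix `B = I + Q/Omega`, the per-grid-point observation log-likelihoods `logg`, the uniform initial distribution, the bootstrap proposal, and multinomial resampling are all simplifying assumptions made only to keep the example self-contained.

```python
import numpy as np

def pgas_skeleton_update(B, logg, x_ref, N, rng):
    """One PGAS sweep that resamples the skeleton (hidden states) on the
    uniformized grid of actual and virtual jump times.

    B     : (S, S) transition matrix of the uniformized chain, B = I + Q/Omega
            (finite state space assumed here only to keep the sketch small)
    logg  : (K, S) log-likelihood of the observations attached to grid point k,
            for each possible hidden state
    x_ref : (K,) current skeleton, kept as the reference trajectory
    N     : number of particles
    rng   : np.random.Generator
    """
    K, S = logg.shape
    X = np.zeros((N, K), dtype=int)      # particle states
    A = np.zeros((N, K), dtype=int)      # ancestor indices
    A[:, 0] = np.arange(N)

    # Initialisation: bootstrap draw from a uniform initial law (assumption);
    # particle N-1 carries the reference trajectory throughout.
    X[:, 0] = rng.integers(S, size=N)
    X[N - 1, 0] = x_ref[0]
    logw = logg[0, X[:, 0]]

    for k in range(1, K):
        w = np.exp(logw - logw.max()); w /= w.sum()
        # Multinomial resampling for the free particles.
        A[: N - 1, k] = rng.choice(N, size=N - 1, p=w)
        # Ancestor sampling for the reference particle:
        # each ancestor i is weighted by w_i * B[x_i, x_ref[k]].
        v = w * B[X[:, k - 1], x_ref[k]]
        A[N - 1, k] = rng.choice(N, p=v / v.sum())
        # Propagate with the uniformized kernel (bootstrap proposal).
        prev = X[A[:, k], k - 1]
        for i in range(N - 1):
            X[i, k] = rng.choice(S, p=B[prev[i]])
        X[N - 1, k] = x_ref[k]
        # With a bootstrap proposal the incremental weight is the likelihood.
        logw = logg[k, X[:, k]]

    # Draw one trajectory proportionally to the final weights and trace it back.
    w = np.exp(logw - logw.max()); w /= w.sum()
    b = rng.choice(N, p=w)
    out = np.empty(K, dtype=int)
    for k in range(K - 1, -1, -1):
        out[k] = X[b, k]
        b = A[b, k]
    return out
```

In the full scheme this update would alternate with resampling the virtual jump times given the current trajectory, as in the uniformization construction of Rao and Teh. Note that in this finite-state sketch the propagation step samples from explicit rows of `B`, which costs $O(S)$ per draw; the method described in the abstract only requires forward simulation from the transition kernel and pointwise evaluation of transition and observation densities, which is why its cost per sweep is of order $N\,\mathbb{E}(J)$ and does not depend on the size of the state space.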