
Stochastic Expectation Propagation

José Miguel Hernández-Lobato
Abstract

Expectation propagation (EP) is a deterministic approximation algorithm that is often used to perform approximate Bayesian parameter learning. EP approximates the full intractable posterior distribution through a set of local approximations that are iteratively refined for each data point. EP can offer analytic and computational advantages over other approximations, such as variational inference (VI), and is the method of choice for a number of models. The local nature of EP appears to make it an ideal candidate for performing Bayesian learning on large-scale datasets. However, EP has a crucial limitation in this context: the number of approximating factors needs to increase with the number of data points, N, which entails a large computational burden. This paper presents an extension to EP, called stochastic expectation propagation (SEP), that maintains a global posterior approximation (like VI) but updates it in a local way (like EP). Experiments on a number of synthetic and real-world datasets indicate that SEP performs almost as well as full EP, but reduces the memory consumption by a factor of N.
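The core idea described in the abstract — keep one *shared* approximating factor instead of N per-data-point factors, and refine it with stochastically selected data points — can be illustrated with a minimal sketch. The toy model below (a 1D probit likelihood with a Gaussian prior) and all function names are illustrative assumptions, not the paper's experiments; the update `f ← f + (s - f)/N` is the SEP-style damped refinement of the single average site, so memory stays O(1) in N rather than O(N).

```python
import math
import random

def probit_moments(m, v, y):
    # Mean and variance of the tilted distribution
    # N(theta; m, v) * Phi(y * theta), via standard moment-matching
    # identities for the probit likelihood.
    s = math.sqrt(1.0 + v)
    z = y * m / s
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = max(0.5 * (1.0 + math.erf(z / math.sqrt(2.0))), 1e-12)
    r = pdf / cdf
    m_new = m + y * v * r / s
    v_new = v - (v * v) * r * (z + r) / (1.0 + v)
    return m_new, v_new

def sep(labels, prior_var=1.0, sweeps=50, seed=0):
    # Toy SEP for a scalar parameter theta with likelihood Phi(y * theta).
    # Natural parameters: tau = 1/variance, nu = mean/variance.
    rng = random.Random(seed)
    N = len(labels)
    tau0, nu0 = 1.0 / prior_var, 0.0   # Gaussian prior N(0, prior_var)
    f_tau, f_nu = 0.0, 0.0             # ONE shared site factor (O(1) memory)
    for _ in range(sweeps * N):
        y = labels[rng.randrange(N)]   # pick a data point at random
        # Global approximation q = prior * f^N.
        q_tau, q_nu = tau0 + N * f_tau, nu0 + N * f_nu
        # Cavity: remove one copy of the average site.
        c_tau, c_nu = q_tau - f_tau, q_nu - f_nu
        if c_tau <= 0.0:
            continue                   # skip invalid cavity (safety guard)
        c_v, c_m = 1.0 / c_tau, c_nu / c_tau
        # Moment-match the tilted distribution for this data point.
        t_m, t_v = probit_moments(c_m, c_v, y)
        s_tau, s_nu = 1.0 / t_v - c_tau, t_m / t_v - c_nu
        # SEP: damped update of the shared site, step size 1/N.
        f_tau += (s_tau - f_tau) / N
        f_nu += (s_nu - f_nu) / N
    q_tau, q_nu = tau0 + N * f_tau, nu0 + N * f_nu
    return q_nu / q_tau, 1.0 / q_tau   # posterior mean and variance

# With mostly positive labels, the inferred mean of theta is positive
# and the posterior variance shrinks below the prior variance.
mean, var = sep([+1] * 8 + [-1] * 2)
```

A full EP implementation would instead store and refine N separate site factors `(f_tau_n, f_nu_n)`; the sketch above replaces them with their average, which is exactly the factor-of-N memory saving the abstract refers to.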
