There is growing interest in learning how the distribution of a response variable changes with a set of predictors. Bayesian nonparametric dependent mixture models provide a flexible approach to this goal. However, several formulations require computationally demanding algorithms for posterior inference. Motivated by this issue, we study a class of predictor-dependent infinite mixture models which relies on a simple representation of the stick-breaking prior via sequential logistic regressions. This formulation retains the desirable properties of popular predictor-dependent stick-breaking priors, and leverages a recent Pólya-gamma data augmentation to facilitate the implementation of several computational methods for posterior inference. These routines include Markov chain Monte Carlo via Gibbs sampling, expectation-maximization algorithms, and mean-field variational Bayes for scalable inference, thereby encouraging wider adoption of Bayesian density regression among practitioners. The algorithms associated with these methods are presented in detail and tested in a toxicology study.
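As an illustrative sketch of the construction alluded to above (the notation here is ours and may differ from the paper's), the predictor-dependent mixture weights of a logit stick-breaking prior can be written as
$$
\pi_h(x) = \nu_h(x) \prod_{l<h} \{1 - \nu_l(x)\}, \qquad \nu_h(x) = \frac{1}{1 + \exp(-x^\top \alpha_h)}, \qquad h = 1, 2, \ldots,
$$
so that each stick-breaking proportion $\nu_h(x)$ is a logistic regression on the predictors $x$ with coefficients $\alpha_h$. Conditioning on Pólya-gamma latent variables renders each logistic likelihood conditionally Gaussian in $\alpha_h$, which is what allows the Gibbs, expectation-maximization, and variational Bayes updates mentioned above to be carried out in closed form.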