Although discrete mixture modeling has formed the backbone of the literature on Bayesian density estimation, it has some well-known disadvantages. We propose an alternative class of priors based on random nonlinear functions of a uniform latent variable with an additive residual. The induced prior for the density is shown to have desirable properties, including ease of centering on an initial guess for the density, large support, posterior consistency, and straightforward computation via Gibbs sampling. Some advantages over discrete mixtures, such as Dirichlet process mixtures of Gaussian kernels, are discussed and illustrated via simulations and an epidemiology application.
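To make the construction concrete, here is a minimal sketch of how a density can be induced by a random nonlinear function of a uniform latent variable with an additive residual. The specific choices below (a Gaussian process prior on the transfer function, Gaussian residuals, centering via the quantile function of a standard normal initial guess, and all parameter values) are illustrative assumptions, not necessarily the authors' exact specification.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def sample_gp_path(grid, mean_fn, length_scale=0.2, amplitude=1.0, jitter=1e-8):
    """Draw one realization of mu(.) on a grid from a GP centered at mean_fn."""
    d = np.abs(grid[:, None] - grid[None, :])
    cov = amplitude**2 * np.exp(-0.5 * (d / length_scale) ** 2)
    cov += jitter * np.eye(len(grid))
    return rng.multivariate_normal(mean_fn(grid), cov)

# Centering: taking mean_fn as the quantile function of an initial guess F0
# centers the induced prior on F0 (here, a standard normal initial guess).
mean_fn = lambda u: norm.ppf(np.clip(u, 1e-6, 1 - 1e-6))

# Draw one random density from the prior: y = mu(u) + eps, u ~ Uniform(0,1).
grid = np.linspace(0.001, 0.999, 200)        # grid of latent u values
mu = sample_gp_path(grid, mean_fn)           # random nonlinear function mu(u)
sigma = 0.3                                  # assumed residual std deviation

u = rng.uniform(size=50_000)                 # latent uniform draws
mu_u = np.interp(u, grid, mu)                # mu evaluated at the latent draws
y = mu_u + rng.normal(scale=sigma, size=u.size)   # samples from the density

# The induced density is f(y) = \int N(y; mu(u), sigma^2) du, approximated
# here by averaging the Gaussian kernel over a subset of the u-draws.
y_grid = np.linspace(-4, 4, 81)
f_hat = norm.pdf(y_grid[:, None], loc=mu_u[None, :5000], scale=sigma).mean(axis=1)
```

In this sketch, a draw of the function mu determines one realization of the random density; repeating the procedure traces out the induced prior over densities, and replacing mean_fn changes the density it is centered on.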