Variational inference is a popular method for approximating the posterior distribution of hierarchical Bayesian models. It is well recognized in the literature that the choice of the approximation family and the regularity properties of the posterior strongly influence the efficiency and accuracy of variational methods. While model-specific conjugate approximations offer simplicity, they often converge slowly and may yield poor approximations. Non-conjugate approximations, by contrast, are more flexible but typically require the evaluation of expensive multidimensional integrals. This study focuses on Bayesian regression models that use possibly non-differentiable loss functions to measure prediction misfit. The data are modeled through a linear predictor, possibly transformed by a bijective link function. Examples include generalized linear models, mixed additive models, support vector machines, and quantile regression. To address the limitations of non-conjugate settings, the study proposes an efficient non-conjugate variational message passing method for approximate posterior inference, which requires only the calculation of univariate numerical integrals when analytical solutions are not available. The approach does not require differentiability, conjugacy, or model-specific data-augmentation strategies, and thereby extends naturally to models with non-conjugate likelihood functions. Additionally, a stochastic implementation is provided to handle large-scale data problems. The performance of the proposed method is evaluated through extensive simulations and real-data examples. Overall, the results highlight the effectiveness of the proposed variational message passing method, demonstrating its computational efficiency and approximation accuracy as an alternative to existing methods in Bayesian inference for regression models.
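To make the "univariate numerical integrals" point concrete, the following is a minimal sketch (not the paper's algorithm) of the kind of one-dimensional expectation such a scheme needs: the expected loss of a non-differentiable check function under a Gaussian variational factor, computed by Gauss-Hermite quadrature. The names `pinball_loss` and `expected_loss_gh`, and the choice of quadrature rule, are illustrative assumptions.

```python
import numpy as np

def pinball_loss(r, tau=0.5):
    # Quantile-regression check loss; non-differentiable at r = 0,
    # so gradient-based conjugate updates are not directly available.
    return np.maximum(tau * r, (tau - 1.0) * r)

def expected_loss_gh(mu, sigma, loss, n_nodes=60):
    # E_{r ~ N(mu, sigma^2)}[loss(r)] via Gauss-Hermite quadrature:
    # an example of the univariate numerical integrals the abstract
    # mentions, used when no closed-form expectation exists.
    x, w = np.polynomial.hermite.hermgauss(n_nodes)
    nodes = mu + np.sqrt(2.0) * sigma * x  # change of variables for N(mu, sigma^2)
    return (w @ loss(nodes)) / np.sqrt(np.pi)
```

For a smooth loss such as the squared error, the quadrature is exact (e.g. `expected_loss_gh(1.0, 2.0, lambda r: r**2)` recovers `mu**2 + sigma**2 = 5`), while for the non-differentiable pinball loss it converges as the number of nodes grows.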