ResearchTrend.AI

Variational Inference via χ-Upper Bound Minimization

1 November 2016
Adji Bousso Dieng
Dustin Tran
Rajesh Ranganath
John Paisley
David M. Blei
Abstract

Variational inference (VI) is widely used as an efficient alternative to Markov chain Monte Carlo. It posits a family of approximating distributions q and finds the closest member to the exact posterior p. Closeness is usually measured via a divergence D(q || p) from q to p. While successful, this approach also has problems. Notably, it typically leads to underestimation of the posterior variance. In this paper we propose CHIVI, a black-box variational inference algorithm that minimizes D_χ(p || q), the χ-divergence from p to q. CHIVI minimizes an upper bound of the model evidence, which we term the χ upper bound (CUBO). Minimizing the CUBO leads to improved posterior uncertainty, and it can also be used with the classical VI lower bound (ELBO) to provide a sandwich estimate of the model evidence. We study CHIVI on three models: probit regression, Gaussian process classification, and a Cox process model of basketball plays. When compared to expectation propagation and classical VI, CHIVI produces better error rates and more accurate estimates of posterior variance.
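The sandwich estimate mentioned in the abstract can be illustrated with a small Monte Carlo sketch. This is not the paper's black-box CHIVI implementation; it is a toy check, assuming a hypothetical 1-D Gaussian target with a known evidence constant and the χ² case (order n = 2) of the CUBO, where CUBO_2 = (1/2) log E_q[(f(z)/q(z))²] upper-bounds log Z while the ELBO lower-bounds it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Unnormalized toy target f(z) = Z * Normal(z; 0, 1) with known evidence
# Z = 3 (an illustrative assumption, not a model from the paper).
logZ = np.log(3.0)

def log_f(z):
    return logZ - 0.5 * z**2 - 0.5 * np.log(2 * np.pi)

# Variational approximation q = Normal(m, s^2), deliberately mismatched
# so both bounds are strict.
m, s = 0.5, 1.5

def log_q(z):
    return -0.5 * ((z - m) / s) ** 2 - np.log(s) - 0.5 * np.log(2 * np.pi)

z = rng.normal(m, s, size=200_000)   # Monte Carlo samples from q
log_w = log_f(z) - log_q(z)          # log importance weights log(f/q)

# ELBO = E_q[log w] lower-bounds log Z (Jensen's inequality).
elbo = log_w.mean()

# CUBO_2 = (1/2) log E_q[w^2] upper-bounds log Z; estimated with a
# numerically stable log-sum-exp over 2 * log_w.
a = 2.0 * log_w
cubo = 0.5 * (a.max() + np.log(np.mean(np.exp(a - a.max()))))

print(f"ELBO {elbo:.3f} <= log Z {logZ:.3f} <= CUBO {cubo:.3f}")
```

With the mismatched q above, both gaps are visible: the ELBO sits below log Z by KL(q || p), and the CUBO sits above it by a χ²-divergence term, bracketing the model evidence as the abstract describes.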
