Tensor Monte Carlo: particle methods for the GPU era

22 June 2018
Laurence Aitchison
arXiv:1806.08593
Abstract

Multi-sample, importance-weighted variational autoencoders (IWAE) give tighter bounds and more accurate uncertainty estimates than standard variational autoencoders (VAE). However, they scale poorly: as the latent dimensionality grows, they require exponentially many samples to retain the benefits of importance weighting. While sequential Monte-Carlo (SMC) can address this problem, it is prohibitively slow because the resampling step imposes sequential structure which cannot be parallelised, and because resampling is non-differentiable, which is problematic when learning approximate posteriors. To address these issues, we developed tensor Monte-Carlo (TMC), which gives exponentially many importance samples by separately drawing K samples for each of the n latent variables, then averaging over all K^n possible combinations. While the sum over exponentially many terms might seem to be intractable, in many cases it can be computed efficiently as a series of tensor inner-products. Finally, we relate TMC to classical message passing, allowing us to combine exact marginalisation over discrete latent variables with importance sampling over continuous latent variables.
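
To make the "series of tensor inner-products" concrete, here is a minimal sketch (not the paper's code) for a chain-structured model z_1 -> z_2 -> ... -> z_n -> x, assuming the importance weight factorises as w(k_1, ..., k_n) = f_1[k_1] * f_2[k_1, k_2] * ... * f_n[k_{n-1}, k_n] * g[k_n]; the function name and factor layout are illustrative. Under that assumption, the average over all K^n combinations collapses into a sequence of K x K contractions:

import numpy as np
from scipy.special import logsumexp

def tmc_log_mean_weight(log_f1, log_fs, log_g):
    """Log of the average importance weight over all K**n index combinations.

    log_f1 : (K,)  log-weight factor for the first latent variable
    log_fs : list of (K, K) log-weight factors linking z_{i-1} to z_i
    log_g  : (K,)  log-likelihood factor tying the last latent to the data
    """
    K = log_f1.shape[0]
    log_msg = log_f1 - np.log(K)  # absorb the 1/K normaliser for the first index
    for log_f in log_fs:
        # Tensor inner product: contract out the previous index in log space,
        # averaging over its K samples. Cost is O(K^2) per latent variable.
        log_msg = logsumexp(log_msg[:, None] + log_f, axis=0) - np.log(K)
    return logsumexp(log_msg + log_g)  # final contraction over the last index

# Sanity check against brute-force enumeration of all K**3 combinations (n = 3).
rng = np.random.default_rng(0)
K = 4
log_f1, log_g = rng.normal(size=K), rng.normal(size=K)
log_fs = [rng.normal(size=(K, K)) for _ in range(2)]
brute = logsumexp(log_f1[:, None, None] + log_fs[0][:, :, None]
                  + log_fs[1][None, :, :] + log_g[None, None, :]) - 3 * np.log(K)
assert np.isclose(tmc_log_mean_weight(log_f1, log_fs, log_g), brute)

The cost is O(n K^2) rather than O(K^n), and because every step is a differentiable log-sum-exp contraction (unlike SMC's resampling), the resulting bound can be optimised with standard gradient methods.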
