Variance Reduction in Stochastic Particle-Optimization Sampling

20 November 2018
Jianyi Zhang
Yang Zhao
Changyou Chen
arXiv:1811.08052
Abstract

Stochastic particle-optimization sampling (SPOS) is a recently developed scalable Bayesian sampling framework that unifies stochastic gradient MCMC (SG-MCMC) and Stein variational gradient descent (SVGD) algorithms based on Wasserstein gradient flows. With a rigorous non-asymptotic convergence theory developed recently, SPOS avoids the particle-collapsing pitfall of SVGD. Nevertheless, variance reduction in SPOS has never been studied. In this paper, we bridge this gap by presenting several variance-reduction techniques for SPOS. Specifically, we propose three variants of variance-reduced SPOS: SAGA particle-optimization sampling (SAGA-POS), SVRG particle-optimization sampling (SVRG-POS), and a variant of SVRG-POS that avoids full gradient computations, denoted SVRG-POS+. Importantly, we provide non-asymptotic convergence guarantees for these algorithms in terms of the 2-Wasserstein metric and analyze their complexities. Remarkably, the results show that our algorithms yield better convergence rates than existing variance-reduced variants of stochastic Langevin dynamics, even though more space is required to store the particles during training. Our theory aligns well with experimental results on both synthetic and real datasets.
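To make the abstract's description more concrete, below is a minimal, hypothetical sketch of how an SVRG-style variance-reduced gradient estimator (in the spirit of SVRG-POS) can be plugged into a simplified SPOS-like particle update that combines an SVGD-style drift with injected Gaussian noise. It is not the authors' implementation: the kernel choice, step sizes, and all names (`rbf_kernel`, `svrg_grad`, `spos_svrg_step`, `grad_fn`) are assumptions made for illustration.

```python
# Illustrative sketch only (not the paper's reference implementation).
# It combines an SVRG-style variance-reduced gradient estimator with a
# simplified SPOS-like particle update: SVGD-style drift + Gaussian noise.
import numpy as np

def rbf_kernel(theta_j, theta_i, h=1.0):
    """RBF kernel k(theta_j, theta_i) and its gradient w.r.t. theta_j."""
    diff = theta_j - theta_i
    k = np.exp(-np.dot(diff, diff) / (2.0 * h ** 2))
    return k, -(diff / h ** 2) * k

def svrg_grad(theta, theta_snap, full_grad_snap, grad_fn, data, idx):
    """SVRG estimator of the full-data gradient of U(theta) = -log p(theta | data):
    minibatch gradient at theta, corrected by the snapshot's minibatch and full gradients.
    grad_fn(theta, minibatch) is assumed to return an unbiased estimate of the
    full-data gradient of U based on the given minibatch."""
    return grad_fn(theta, data[idx]) - grad_fn(theta_snap, data[idx]) + full_grad_snap

def spos_svrg_step(particles, snapshots, full_grads, grad_fn, data,
                   step=1e-3, beta=1.0, batch=32, rng=None):
    """One simplified SPOS-like update for all particles with variance-reduced gradients."""
    rng = rng or np.random.default_rng(0)
    n = len(particles)
    idx = rng.choice(len(data), size=min(batch, len(data)), replace=False)
    # Variance-reduced gradients of U for every particle (same minibatch reused).
    grads = [svrg_grad(p, s, g, grad_fn, data, idx)
             for p, s, g in zip(particles, snapshots, full_grads)]
    new_particles = []
    for i, theta_i in enumerate(particles):
        drift = np.zeros_like(theta_i)
        for j, theta_j in enumerate(particles):
            k, dk = rbf_kernel(theta_j, theta_i)
            # Kernel-weighted attraction toward high posterior density plus repulsion.
            drift += (-k * grads[j] + dk) / n
        noise = np.sqrt(2.0 * step / beta) * rng.standard_normal(theta_i.shape)
        new_particles.append(theta_i + step * drift + noise)
    return new_particles
```

In this sketch the snapshot's full-data gradient would be recomputed periodically, as in standard SVRG; SAGA-POS would instead maintain a per-datapoint gradient table, and SVRG-POS+ (per the abstract) avoids the full-gradient passes altogether.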