
Variational Wasserstein gradient flow

International Conference on Machine Learning (ICML), 2021
4 December 2021
JiaoJiao Fan
Amirhossein Taghvaei
Main: 9 pages · Bibliography: 4 pages · Appendix: 31 pages · 17 figures · 10 tables
Abstract

The gradient flow of a functional over the space of probability densities with respect to the Wasserstein metric often exhibits nice properties and has been utilized in several machine learning applications. The standard approach to computing the Wasserstein gradient flow is the finite-difference method, which discretizes the underlying space over a grid and is therefore not scalable. In this work, we propose a scalable proximal-gradient-type algorithm for the Wasserstein gradient flow. The key to our method is a variational formulation of the objective function, which makes it possible to realize the JKO proximal map through primal-dual optimization. This primal-dual problem can be solved efficiently by alternately updating the parameters in the inner and outer loops. Our framework covers all the classical Wasserstein gradient flows, including the heat equation and the porous medium equation. We demonstrate the performance and scalability of our algorithm with several numerical examples.
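For context, the grid-based baseline the abstract contrasts against can be sketched in a few lines. The snippet below is not the paper's method; it is a minimal finite-difference solver for the 1-D heat equation (the Wasserstein gradient flow of the entropy), using an explicit Euler step on a uniform grid with reflecting boundaries. The function name and step sizes are illustrative choices; the point is that the grid size, and hence the cost, grows exponentially with dimension.

```python
import numpy as np

def heat_equation_fd(rho0, dx, dt, n_steps):
    """Explicit Euler for d(rho)/dt = d^2(rho)/dx^2 with zero-flux ends."""
    rho = rho0.copy()
    for _ in range(n_steps):
        lap = np.zeros_like(rho)
        # standard 3-point Laplacian in the interior
        lap[1:-1] = (rho[2:] - 2 * rho[1:-1] + rho[:-2]) / dx**2
        # reflecting (Neumann) boundaries keep total mass conserved
        lap[0] = (rho[1] - rho[0]) / dx**2
        lap[-1] = (rho[-2] - rho[-1]) / dx**2
        rho += dt * lap
    return rho

x = np.linspace(-1.0, 1.0, 201)
dx = x[1] - x[0]
rho0 = np.exp(-50 * x**2)
rho0 /= rho0.sum() * dx                # normalize to a probability density
# dt = 0.2 * dx^2 respects the explicit-scheme stability limit dt <= dx^2 / 2
rho_T = heat_equation_fd(rho0, dx, dt=0.2 * dx**2, n_steps=2000)
print(rho_T.sum() * dx)                # mass is preserved (≈ 1.0)
```

A JKO-type scheme instead obtains the next density as a proximal step, minimizing the objective plus a scaled squared Wasserstein distance to the current density; the paper's contribution is realizing that proximal map variationally, without any grid.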
