Proximal optimal transport divergences

We introduce the proximal optimal transport divergence, a novel discrepancy measure that interpolates between information divergences and optimal transport distances via an infimal-convolution formulation. This divergence provides a principled foundation for optimal transport proximals and for the proximal optimization methods frequently used in generative modeling. We explore its mathematical properties, including smoothness, boundedness, and computational tractability, and establish connections to primal-dual formulations and adversarial learning. Building on the Benamou–Brenier dynamic formulation of the optimal transport cost, we also establish a dynamic formulation for proximal OT divergences; the result is a first-order mean-field game whose optimality conditions are governed by a pair of nonlinear partial differential equations: a backward Hamilton–Jacobi equation and a forward continuity equation. Our framework generalizes existing approaches while offering new insights and computational tools for generative modeling, distributional optimization, and gradient-based learning in probability spaces.
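To fix ideas, here is a hedged sketch of the infimal-convolution construction the abstract refers to; the notation below (information divergence $D$, transport cost $W_c$, proximal parameter $\varepsilon$, intermediate measure $R$) is illustrative and may differ from the paper's:

\[
D^{\varepsilon}_{c}(P \,\|\, Q) \;=\; \inf_{R \in \mathcal{P}(\mathcal{X})} \Big\{ D(R \,\|\, Q) \;+\; \tfrac{1}{\varepsilon}\, W_c(P, R) \Big\}.
\]

In this form, small $\varepsilon$ forces the minimizer $R$ toward $P$ and recovers the divergence $D(P \,\|\, Q)$, while for large $\varepsilon$ (after rescaling the objective by $\varepsilon$) the minimizer is driven toward $Q$ and the transport cost $W_c(P, Q)$ is recovered; the minimizing $R$ itself plays the role of an optimal transport proximal of $D(\cdot \,\|\, Q)$ at $P$.

For the dynamic side, a hedged sketch under a quadratic (Benamou–Brenier) transport cost: replacing the terminal constraint of the classical dynamic formulation with a terminal divergence penalty gives

\[
\inf_{(\rho, v)} \Big\{ \tfrac{1}{2\varepsilon} \int_0^1 \!\! \int |v_t(x)|^2 \, \rho_t(dx)\, dt \;+\; D(\rho_1 \,\|\, Q) \Big\}, \qquad \partial_t \rho_t + \nabla \!\cdot\! (\rho_t v_t) = 0, \quad \rho_0 = P,
\]

whose first-order optimality conditions take the mean-field-game form described in the abstract: the optimal velocity is a gradient, $v_t = \varepsilon \nabla \phi_t$, with

\[
\partial_t \phi_t + \tfrac{\varepsilon}{2} |\nabla \phi_t|^2 = 0 \quad \text{(backward Hamilton–Jacobi)}, \qquad \partial_t \rho_t + \varepsilon\, \nabla \!\cdot\! (\rho_t \nabla \phi_t) = 0 \quad \text{(forward continuity)},
\]

where the backward equation is closed by a terminal condition coupling $\phi_1$ to the first variation of $D(\cdot \,\|\, Q)$ at $\rho_1$ (up to sign conventions). This sketch follows standard Benamou–Brenier and mean-field-game calculations and is not quoted from the paper.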
@article{baptista2025_2505.12097,
  title   = {Proximal optimal transport divergences},
  author  = {Ricardo Baptista and Panagiota Birmpa and Markos A. Katsoulakis and Luc Rey-Bellet and Benjamin J. Zhang},
  journal = {arXiv preprint arXiv:2505.12097},
  year    = {2025}
}