Linear convergence of proximal descent schemes on the Wasserstein space

22 November 2024
arXiv:2411.15067
Razvan-Andrei Lascu
Mateusz B. Majka
David Šiška
Łukasz Szpruch
Abstract

We investigate proximal descent methods, inspired by the minimizing movement scheme introduced by Jordan, Kinderlehrer and Otto, for optimizing entropy-regularized functionals on the Wasserstein space. We establish linear convergence under flat convexity assumptions, thereby relaxing the common reliance on geodesic convexity. Our analysis circumvents the need for discrete-time adaptations of the Evolution Variational Inequality (EVI). Instead, we leverage a uniform logarithmic Sobolev inequality (LSI) and the entropy "sandwich" lemma, extending the analysis from arXiv:2201.10469 and arXiv:2202.01009. The major challenge in the proof via LSI is to show that the relative Fisher information $I(\cdot \mid \pi)$ is well-defined at every step of the scheme. Since the relative entropy is not Wasserstein differentiable, we prove that along the scheme the iterates belong to a certain class of Sobolev regularity, and hence the relative entropy $\operatorname{KL}(\cdot \mid \pi)$ has a unique Wasserstein sub-gradient, and that the relative Fisher information is indeed finite.
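For readers unfamiliar with the objects named in the abstract, the following is a minimal LaTeX sketch of their standard textbook forms; the step size $\tau$, functional $F$, target measure $\pi$, and LSI constant $\lambda$ are generic placeholders, and the precise assumptions and constants used in the paper may differ.

\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}

% JKO minimizing movement: each iterate solves a proximal problem of
% step size \tau > 0 in the quadratic Wasserstein metric W_2.
\[
  \mu_{k+1} \in \operatorname*{arg\,min}_{\mu \in \mathcal{P}_2(\mathbb{R}^d)}
  \Big\{ F(\mu) + \tfrac{1}{2\tau}\, W_2^2(\mu, \mu_k) \Big\}
\]

% Relative Fisher information of \mu with respect to the target \pi;
% finiteness of this quantity along the scheme is the key technical point.
\[
  I(\mu \mid \pi) = \int_{\mathbb{R}^d}
  \Big| \nabla \log \tfrac{\mathrm{d}\mu}{\mathrm{d}\pi} \Big|^2 \, \mathrm{d}\mu
\]

% Logarithmic Sobolev inequality with constant \lambda > 0: it bounds the
% relative entropy by the relative Fisher information, which is what turns
% a per-step entropy decrease into geometric (linear) convergence.
\[
  \operatorname{KL}(\mu \mid \pi) \le \frac{1}{2\lambda}\, I(\mu \mid \pi)
\]

\end{document}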
