arXiv:2202.08919
Debiaser Beware: Pitfalls of Centering Regularized Transport Maps

17 February 2022
Aram-Alexandre Pooladian
Marco Cuturi
Jonathan Niles-Weed
Abstract

Estimating optimal transport (OT) maps (a.k.a. Monge maps) between two measures $P$ and $Q$ is a problem fraught with computational and statistical challenges. A promising approach lies in using the dual potential functions obtained when solving an entropy-regularized OT problem between samples $P_n$ and $Q_n$, which can be used to recover an approximately optimal map. The negentropy penalization in that scheme introduces, however, an estimation bias that grows with the regularization strength. A well-known remedy to debias such estimates, which has gained wide popularity among practitioners of regularized OT, is to center them, by subtracting auxiliary problems involving $P_n$ and itself, as well as $Q_n$ and itself. We do prove that, under favorable conditions on $P$ and $Q$, debiasing can yield better approximations to the Monge map. However, and perhaps surprisingly, we present a few cases in which debiasing is provably detrimental in a statistical sense, notably when the regularization strength is large or the number of samples is small. These claims are validated experimentally on synthetic and real datasets, and should reopen the debate on whether debiasing is needed when using entropic optimal transport.
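
The abstract contrasts two estimators: the plain entropic map built from the dual potential of the regularized problem between $P_n$ and $Q_n$, and a debiased (centered) counterpart that also involves the self-transport problems. The sketch below is a minimal, illustrative implementation of that idea in NumPy/SciPy, not the paper's exact construction: `sinkhorn_potentials`, `entropic_map`, and `centered_potential` are hypothetical helper names, and the centering shown (subtracting the potential of the $Q_n$-vs-$Q_n$ problem) is one simple way to realize the debiasing described above.

```python
# Minimal sketch (illustrative, not the paper's exact estimator) of the two
# objects compared in the abstract: the entropic map built from the Sinkhorn
# dual potential between samples P_n and Q_n, and a "centered" variant that
# also solves the Q_n-vs-Q_n self-transport problem. Helper names are
# hypothetical.
import numpy as np
from scipy.special import logsumexp


def sinkhorn_potentials(X, Y, eps, n_iter=1000):
    """Log-domain Sinkhorn between uniform empirical measures on X and Y,
    for the squared-Euclidean cost. Returns the dual potentials (f, g)."""
    n, m = len(X), len(Y)
    C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)   # c(x_i, y_j)
    log_a = np.full(n, -np.log(n))                        # uniform weights on P_n
    log_b = np.full(m, -np.log(m))                        # uniform weights on Q_n
    f, g = np.zeros(n), np.zeros(m)
    for _ in range(n_iter):
        # Alternating soft-min (block-coordinate) updates of the dual potentials
        f = -eps * logsumexp((g[None, :] - C) / eps + log_b[None, :], axis=1)
        g = -eps * logsumexp((f[:, None] - C) / eps + log_a[:, None], axis=0)
    return f, g


def entropic_map(x, Y, g, eps):
    """Barycentric projection of the entropic plan conditioned on x:
    a weighted average of the targets y_j, with weights driven by g."""
    c = ((x[None, :] - Y) ** 2).sum(-1)
    log_w = (g - c) / eps - np.log(len(Y))
    w = np.exp(log_w - logsumexp(log_w))                  # normalize the weights
    return w @ Y


def centered_potential(Y, g, eps, n_iter=1000):
    """Center g by subtracting the potential of the Q_n-vs-Q_n self problem;
    one simple way to realize the debiasing discussed in the abstract."""
    _, g_self = sinkhorn_potentials(Y, Y, eps, n_iter)
    return g - g_self


# Toy usage: Gaussian samples, plain vs. centered entropic map estimates.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))              # samples from P
Y = rng.normal(size=(200, 2)) + 2.0        # samples from Q (shifted mean)
eps = 0.1
f, g = sinkhorn_potentials(X, Y, eps)
T_plain = np.array([entropic_map(x, Y, g, eps) for x in X])
g_bar = centered_potential(Y, g, eps)
T_centered = np.array([entropic_map(x, Y, g_bar, eps) for x in X])
```

In this toy setup the true Monge map for the quadratic cost is the shift $x \mapsto x + (2, 2)$, so comparing `T_plain` and `T_centered` against it, while varying `eps` and the sample size, is one way to probe the regimes discussed in the abstract.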
