
On the Nonconvexity of Push-Forward Constraints and Its Consequences in Machine Learning

Abstract

The push-forward operation enables one to redistribute a probability measure through a deterministic map. It plays a key role in statistics and optimization: many learning problems (notably from optimal transport, generative modeling, and algorithmic fairness) include constraints or penalties framed as push-forward conditions on the model. However, the literature lacks general theoretical insights on the (non)convexity of such constraints and its consequences for the associated learning problems. This paper aims to fill this gap. In the first part, we provide a range of necessary and sufficient conditions for the (non)convexity of two sets of functions: the maps transporting one probability measure to another, and the maps inducing equal output distributions across distinct probability measures. This highlights that for most probability measures, these push-forward constraints are not convex. In the second part, we show how this result implies critical limitations on the design of convex optimization problems for learning generative models or groupwise fair predictors. We hope this work helps researchers and practitioners better understand the critical impact of push-forward conditions on convexity.
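
For intuition, here is a minimal illustration of the nonconvexity at stake (our example, not taken from the paper). Let μ = ν be the standard Gaussian N(0, 1) on ℝ, and write T#μ for the push-forward of μ by T. Then

\[
T(x) = x, \qquad S(x) = -x \qquad\Longrightarrow\qquad T_{\#}\mu = S_{\#}\mu = \nu,
\]
\[
\text{yet}\qquad \tfrac{1}{2}\big(T(x) + S(x)\big) = 0 \qquad\Longrightarrow\qquad \Big(\tfrac{T+S}{2}\Big)_{\#}\mu = \delta_0 \neq \nu,
\]

so the constraint set \{T : T_{\#}\mu = \nu\} fails to be convex even in this simple one-dimensional Gaussian case.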

@article{lara2025_2403.07471,
  title={On the Nonconvexity of Push-Forward Constraints and Its Consequences in Machine Learning},
  author={Lucas de Lara and Mathis Deronzier and Alberto González-Sanz and Virgile Foy},
  journal={arXiv preprint arXiv:2403.07471},
  year={2025}
}