Lipschitz standardization for multivariate learning

Abstract

In this work, we focus on the use of gradient-based optimization to perform approximate inference in multivariate probabilistic models. We show that these approaches suffer from the negative transfer problem, widely studied in the context of multi-task learning. As a result, they often infer models that fit only a subset of the observed variables, overlooking the rest. Unfortunately, since the likelihood model is fixed, adaptive solutions from the multi-task literature do not apply in the probabilistic setting. Yet, we show that likelihood functions, and specifically their local Lipschitz smoothness, can instead be modified through the data itself via standardization. In our analysis, we show that existing data standardization techniques often promote fairer learning under common continuous likelihood functions. Based on this finding, we propose a novel data preprocessing method, Lipschitz standardization, which results in a fairer learning process in models that mix continuous and discrete variables. Our experiments on different real-world datasets show that Lipschitz standardization leads to more accurate models for missing data imputation than those inferred using standard data preprocessing techniques. The models and datasets employed in the experiments are available at https://github.com/adrianjav/lipschitz-standardization.
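As a concrete illustration of the gradient imbalance the abstract describes (a minimal sketch, not the paper's algorithm; the actual Lipschitz standardization is implemented in the repository linked above, and all variable names and values here are hypothetical), the following NumPy snippet shows how two variables on very different scales yield Gaussian likelihood gradients of very different magnitudes, and how classic z-score standardization rebalances them:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Two observed variables living on very different scales.
x1 = rng.normal(loc=5.0, scale=1.0, size=n)      # unit-scale variable
x2 = rng.normal(loc=500.0, scale=100.0, size=n)  # ~100x larger scale

def nll_grad_mu(x, mu, sigma=1.0):
    """Gradient of the Gaussian negative log-likelihood w.r.t. the mean."""
    return np.sum(mu - x) / sigma**2

mu0 = 1.0  # shared, arbitrary initialization for both means

# Raw data: the large-scale variable dominates the joint gradient, so a
# single learning rate either under-fits x1 or diverges on x2 -- the
# negative-transfer behavior described in the abstract.
print(nll_grad_mu(x1, mu0))  # ~ -4e3
print(nll_grad_mu(x2, mu0))  # ~ -5e5

# Classic z-score standardization puts both likelihoods on a comparable
# footing, balancing the gradients (and the local smoothness) across
# variables; the paper's method extends this idea beyond this simple
# continuous Gaussian case to mixed continuous/discrete likelihoods.
z1 = (x1 - x1.mean()) / x1.std()
z2 = (x2 - x2.mean()) / x2.std()
print(nll_grad_mu(z1, mu0))  # ~ 1e3
print(nll_grad_mu(z2, mu0))  # ~ 1e3
```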
