Causality-Inspired Robustness for Nonlinear Models via Representation Learning

Abstract

Distributional robustness is a central goal of prediction algorithms due to the prevalence of distribution shifts in real-world data. The prediction model aims to minimize the worst-case risk over a class of distributions, known as an uncertainty set. Causality provides a modeling framework with a rigorous robustness guarantee in this sense, where the uncertainty set is data-driven rather than pre-specified as in traditional distributionally robust optimization. However, current causality-inspired robustness methods possess finite-radius robustness guarantees only in linear settings, where the causal relationships among the covariates and the response are linear. In this work, we propose a nonlinear method under a causal framework by incorporating recent developments in identifiable representation learning, and we establish a distributional robustness guarantee. To the best of our knowledge, this is the first causality-inspired robustness method with such a finite-radius robustness guarantee in nonlinear settings. We empirically validate the theoretical findings on both synthetic data and real-world single-cell data, also illustrating that finite-radius robustness is crucial.
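To make the worst-case-risk objective concrete, the following minimal sketch (not the paper's method) minimizes the maximum mean-squared-error risk over a finite uncertainty set of environments; the environments, shift scales, and grid search are illustrative assumptions.

```python
import numpy as np

# Hypothetical uncertainty set: datasets from shifted distributions
# sharing the same mean but with different noise levels.
rng = np.random.default_rng(0)
envs = [rng.normal(loc=1.0, scale=s, size=500) for s in (0.5, 1.0, 2.0)]

def worst_case_risk(theta, envs):
    # Worst-case (maximum) mean squared error across environments.
    return max(np.mean((x - theta) ** 2) for x in envs)

# Minimize the worst-case risk over a parameter grid.
grid = np.linspace(-2.0, 4.0, 601)
theta_robust = grid[np.argmin([worst_case_risk(t, envs) for t in grid])]
```

Here the robust estimate lands near the shared mean of the environments; in the causal setting, the uncertainty set is instead derived from the data via the causal structure rather than listed explicitly as above.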

@article{šola2025_2505.12868,
  title={Causality-Inspired Robustness for Nonlinear Models via Representation Learning},
  author={Marin Šola and Peter Bühlmann and Xinwei Shen},
  journal={arXiv preprint arXiv:2505.12868},
  year={2025}
}