Neural Conditional Probability for Uncertainty Quantification

Main: 10 pages · 7 figures · 5 tables · Bibliography: 4 pages · Appendix: 20 pages
Abstract

We introduce Neural Conditional Probability (NCP), an operator-theoretic approach to learning conditional distributions with a focus on statistical inference tasks. NCP can be used to build conditional confidence regions and to extract key statistics such as the conditional quantiles, mean, and covariance. It offers streamlined learning via a single unconditional training phase, allowing efficient inference without retraining even when the conditioning changes. By leveraging the approximation capabilities of neural networks, NCP efficiently handles a wide variety of complex probability distributions. We provide theoretical guarantees that ensure both optimization consistency and statistical accuracy. In experiments, we show that NCP with a 2-hidden-layer network matches or outperforms leading methods. This demonstrates that a minimalistic architecture with a theoretically grounded loss can achieve competitive results, even against more complex architectures.
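To make the scale of the "2-hidden-layer network" concrete, here is a minimal sketch of such an architecture in PyTorch. This is not the authors' implementation: the NCP training objective and the inference formulas are defined in the paper and are not reproduced here, and all layer widths and variable dimensions below are hypothetical.

```python
# Minimal sketch (not the authors' code): a 2-hidden-layer MLP of the kind
# the abstract describes. NCP-style methods learn embeddings of X and Y with
# small networks like this and derive conditional statistics from them; the
# NCP loss itself is defined in the paper. All dimensions are hypothetical.
import torch
import torch.nn as nn

class TwoLayerEmbedding(nn.Module):
    """Feed-forward network with two hidden layers mapping inputs to an embedding."""
    def __init__(self, in_dim: int, hidden: int = 64, embed_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),   # hidden layer 1
            nn.Linear(hidden, hidden), nn.ReLU(),   # hidden layer 2
            nn.Linear(hidden, embed_dim),           # embedding head
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# A single unconditional training phase would fit embeddings of X and Y on
# joint samples; conditioning on a new x at inference needs no retraining.
u = TwoLayerEmbedding(in_dim=5)   # embeds the conditioning variable X
v = TwoLayerEmbedding(in_dim=3)   # embeds the target variable Y
x = torch.randn(8, 5)
y = torch.randn(8, 3)
print(u(x).shape, v(y).shape)     # torch.Size([8, 32]) torch.Size([8, 32])
```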

@article{kostic2025_2407.01171,
  title={Neural Conditional Probability for Uncertainty Quantification},
  author={Vladimir R. Kostic and Karim Lounici and Gregoire Pacreau and Pietro Novelli and Giacomo Turri and Massimiliano Pontil},
  journal={arXiv preprint arXiv:2407.01171},
  year={2025}
}