Deep Distributionally Robust Learning for Calibrated Uncertainties under Domain Shift
- OODUQCV

We propose a deep distributionally robust learning (DRL) framework for calibrated uncertainties under domain shift, where the source (training) distribution differs significantly from the target (test) distribution. In addition to the standard class predictor, our framework contains a binary domain classifier that estimates the density ratio between the source and target domains. We parameterize both components with neural networks and train them end-to-end. The resulting calibrated uncertainties benefit many downstream tasks, including unsupervised domain adaptation (UDA) and semi-supervised learning (SSL), where methods such as self-training and FixMatch use uncertainties to select confident pseudo-labels. Our experiments show that introducing DRL into these methods leads to significant improvements in cross-domain performance. We also demonstrate that the produced density ratio estimates agree with human selection frequencies, suggesting a match with human-perceived uncertainties. The source code of this work will be made publicly available.
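The core idea of estimating a density ratio from a binary domain classifier can be illustrated with a minimal sketch. The snippet below is not the paper's implementation: it trains a toy logistic-regression domain discriminator on hypothetical 1-D Gaussian source/target samples and converts its output probability into a density-ratio estimate via the classifier's odds, r(x) = P(target | x) / P(source | x) (valid when the two domains contribute equal sample counts).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D data: source and target Gaussians with shifted means
# (hypothetical stand-ins for source/target feature distributions).
x_src = rng.normal(0.0, 1.0, size=1000)
x_tgt = rng.normal(2.0, 1.0, size=1000)

X = np.concatenate([x_src, x_tgt])
y = np.concatenate([np.zeros(len(x_src)), np.ones(len(x_tgt))])  # 0 = source, 1 = target

# Logistic-regression domain classifier trained by plain gradient descent.
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * X + b)))
    w -= 0.1 * np.mean((p - y) * X)
    b -= 0.1 * np.mean(p - y)

def density_ratio(x):
    """Estimate p_target(x) / p_source(x) from the classifier's odds.

    With balanced domain samples, r(x) = P(target|x) / P(source|x).
    """
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))
    return p / (1.0 - p)

# The ratio is large where the target is dense and small where the
# source is dense, which is what downstream pseudo-label selection uses.
print(density_ratio(2.0), density_ratio(0.0))
```

In the paper's framework the discriminator is a neural network trained jointly with the class predictor, but the odds-to-ratio conversion shown here is the standard way a binary domain classifier yields density-ratio estimates.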