
Quantifying Adversarial Uncertainty in Evidential Deep Learning using Conflict Resolution

Main: 9 pages
Appendix: 16 pages
Bibliography: 3 pages
15 figures
11 tables
Abstract

The reliability of deep learning models is critical for deployment in high-stakes applications, where out-of-distribution (OOD) or adversarial inputs may lead to detrimental outcomes. Evidential Deep Learning (EDL), an efficient paradigm for uncertainty quantification, models predictions as Dirichlet distributions computed from a single forward pass. However, EDL is particularly vulnerable to adversarially perturbed inputs, on which it makes overconfident errors. Conflict-aware Evidential Deep Learning (C-EDL) is a lightweight post-hoc uncertainty quantification approach that mitigates these issues, enhancing adversarial and OOD robustness without retraining. C-EDL generates diverse, task-preserving transformations of each input and quantifies representational disagreement across them to calibrate uncertainty estimates when needed. C-EDL's conflict-aware prediction adjustment improves the detection of OOD and adversarial inputs while maintaining high in-distribution accuracy and low computational overhead. Our experimental evaluation shows that C-EDL significantly outperforms state-of-the-art EDL variants and competitive baselines, achieving substantial reductions in coverage on OOD data (up to 55%) and adversarial data (up to 90%) across a range of datasets, attack types, and uncertainty metrics.
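
The idea described in the abstract can be illustrated with a short PyTorch sketch. This is not the paper's implementation: the choice of transformations, the conflict measure (total-variation distance between per-view predictive distributions), and the evidence-discounting rule below are illustrative assumptions, and `model` and `transforms` are hypothetical placeholders for an EDL classifier and a set of task-preserving input transformations.

import torch
import torch.nn.functional as F

def dirichlet_uncertainty(evidence: torch.Tensor) -> torch.Tensor:
    """Vacuity u = C / sum(alpha) for Dirichlet parameters
    alpha = evidence + 1 (the standard EDL parameterization)."""
    num_classes = evidence.shape[-1]
    alpha = evidence + 1.0
    return num_classes / alpha.sum(dim=-1)

def conflict_aware_evidence(model, x: torch.Tensor, transforms) -> torch.Tensor:
    """Sketch of a conflict-aware adjustment: run the EDL model on several
    task-preserving transformations of x, measure how much the resulting
    predictive distributions disagree, and discount the pooled evidence by
    that disagreement so conflicting views yield higher uncertainty."""
    # Non-negative evidence per transformed view: shape (K, B, C).
    evidences = torch.stack([F.relu(model(t(x))) for t in transforms])
    alphas = evidences + 1.0
    # Expected class probabilities under each view's Dirichlet.
    probs = alphas / alphas.sum(dim=-1, keepdim=True)
    mean_probs = probs.mean(dim=0)
    # Conflict in [0, 1]: mean total-variation distance of each view
    # from the consensus distribution.
    conflict = 0.5 * (probs - mean_probs).abs().sum(dim=-1).mean(dim=0)
    pooled = evidences.mean(dim=0)
    # Discount evidence toward zero (maximal vacuity) as conflict grows.
    return (1.0 - conflict).clamp(min=0.0).unsqueeze(-1) * pooled

# Example usage (hypothetical model and threshold): flag inputs whose
# post-adjustment vacuity exceeds a rejection threshold.
transforms = [lambda x: x, lambda x: torch.flip(x, dims=[-1])]
evidence = conflict_aware_evidence(model, batch, transforms)
reject = dirichlet_uncertainty(evidence) > 0.5

The design choice in this sketch is that discounting drives the evidence toward zero, which by construction maximizes Dirichlet vacuity, so representational disagreement across views translates directly into higher reported uncertainty, while agreeing views leave in-distribution predictions essentially untouched.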

@article{barker2025_2506.05937,
  title={Quantifying Adversarial Uncertainty in Evidential Deep Learning using Conflict Resolution},
  author={Charmaine Barker and Daniel Bethell and Simos Gerasimou},
  journal={arXiv preprint arXiv:2506.05937},
  year={2025}
}