QUCE: The Minimisation and Quantification of Path-Based Uncertainty for Generative Counterfactual Explanations

27 February 2024
Jamie Duell, Monika Seisenberger, Hsuan-Wei Fu, Xiuyi Fan
Communities: UQCV, BDL
Abstract

Deep Neural Networks (DNNs) stand out as one of the most prominent approaches within the Machine Learning (ML) domain. The efficacy of DNNs has surged alongside recent increases in computational capacity, allowing these approaches to scale to significant complexities for addressing predictive challenges in big data. However, as the complexity of DNN models rises, interpretability diminishes. In response to this challenge, explainable models such as Adversarial Gradient Integration (AGI) leverage path-based gradients provided by DNNs to elucidate their decisions. Yet the performance of path-based explainers can be compromised when gradients exhibit irregularities during out-of-distribution path traversal. In this context, we introduce Quantified Uncertainty Counterfactual Explanations (QUCE), a method designed to mitigate out-of-distribution traversal by minimizing path uncertainty. QUCE not only quantifies uncertainty when presenting explanations but also generates more certain counterfactual examples. We showcase the performance of the QUCE method by comparing it with competing methods for both path-based explanations and generative counterfactual examples.
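
To make the idea of path-based gradient attribution concrete, the sketch below shows a generic integrated-gradients-style path integral in PyTorch. It is a minimal illustration of the family of explainers the paper builds on (such as AGI), not the QUCE algorithm itself: the toy model, zero baseline, and straight-line interpolation path are assumptions chosen for brevity, and QUCE's specific contribution of optimising the path to minimise uncertainty is only indicated in the comments.

# Hedged sketch: path-based attribution via a straight-line path integral of
# gradients. Names such as `path_attribution` and the toy model are
# illustrative assumptions, not the authors' implementation.
import torch

def path_attribution(model, x, baseline, steps=50):
    """Accumulate gradients along a straight-line path from `baseline` to `x`."""
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1)
    # Interpolated points along the path; QUCE would instead optimise a path
    # that keeps these points in-distribution (low uncertainty).
    path = baseline + alphas * (x - baseline)
    path.requires_grad_(True)
    outputs = model(path)                                # scores at each path point
    grads = torch.autograd.grad(outputs.sum(), path)[0]  # gradients w.r.t. the path
    # Riemann approximation of the path integral of gradients.
    return (x - baseline) * grads.mean(dim=0)

# Toy usage with a small classifier head.
model = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.ReLU(), torch.nn.Linear(8, 1))
x = torch.randn(1, 4)
baseline = torch.zeros(1, 4)
print(path_attribution(model, x, baseline))

On top of a path integral like this, the abstract indicates that QUCE additionally optimises the path itself so that interpolated points avoid out-of-distribution regions, quantifies the remaining uncertainty, and presents the resulting path endpoint as a more certain generative counterfactual example.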

@article{duell2025_2402.17516,
  title={QUCE: The Minimisation and Quantification of Path-Based Uncertainty for Generative Counterfactual Explanations},
  author={Jamie Duell and Monika Seisenberger and Hsuan Fu and Xiuyi Fan},
  journal={arXiv preprint arXiv:2402.17516},
  year={2025}
}