Conceptualizing Uncertainty

5 March 2025
Isaac Roberts, Alexander Schulz, Sarah Schroeder, Fabian Hinder, Barbara Hammer
Abstract

Uncertainty in machine learning refers to the degree of confidence, or lack thereof, in a model's predictions. While uncertainty quantification methods exist, explanations of uncertainty, especially in high-dimensional settings, remain an open challenge. Existing work focuses on feature attribution approaches, which are restricted to local explanations. Understanding uncertainty, its origins, and its characteristics on a global scale is crucial for enhancing interpretability and trust in a model's predictions. In this work, we propose to explain the uncertainty in high-dimensional data classification settings by means of concept activation vectors, which give rise to both local and global explanations of uncertainty. We demonstrate the utility of the generated explanations by leveraging them to refine and improve our model.
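The abstract builds on the general idea of concept activation vectors (CAVs): a linear direction in a model's activation space that separates examples of a human-interpretable concept from random examples, along which one can measure the sensitivity of some model output. The sketch below is a minimal illustration of that generic recipe applied to an uncertainty score rather than a class logit; it is not the authors' implementation. The two-layer toy model, the synthetic concept/random sets, the use of predictive entropy as the uncertainty measure, and all function names are assumptions introduced here for illustration only.

```python
# Minimal sketch: a CAV-style sensitivity score for an uncertainty measure.
# Assumptions (not from the paper): a toy 2-layer network, predictive entropy
# as the uncertainty score, and synthetic "concept" vs. "random" example sets.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy network: input x -> hidden activations h -> class probabilities p.
W1, b1 = rng.normal(size=(20, 10)), rng.normal(size=10)
W2, b2 = rng.normal(size=(10, 3)), rng.normal(size=3)

def hidden(x):
    # Activations at the layer where the concept direction is defined.
    return np.tanh(x @ W1 + b1)

def probs_from_hidden(h):
    # Softmax head on top of the hidden layer.
    z = h @ W2 + b2
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def entropy(h):
    # Assumed uncertainty score: predictive entropy of the class distribution.
    p = probs_from_hidden(h)
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

# 1) Learn a CAV: linear probe separating concept vs. random activations.
concept_x = rng.normal(loc=1.0, size=(100, 20))   # placeholder concept inputs
random_x = rng.normal(loc=0.0, size=(100, 20))    # placeholder random inputs
H = hidden(np.vstack([concept_x, random_x]))
y = np.r_[np.ones(100), np.zeros(100)]
probe = LogisticRegression(max_iter=1000).fit(H, y)
cav = probe.coef_[0] / np.linalg.norm(probe.coef_[0])  # unit concept direction

# 2) Local explanation: directional derivative of the uncertainty score along
#    the CAV, estimated here with a simple finite difference.
def uncertainty_sensitivity(x, eps=1e-3):
    h = hidden(x[None, :])
    return float((entropy(h + eps * cav) - entropy(h))[0] / eps)

# 3) Global explanation: fraction of inputs whose uncertainty increases when
#    moved toward the concept (a TCAV-style aggregate score).
test_x = rng.normal(size=(200, 20))
sens = np.array([uncertainty_sensitivity(x) for x in test_x])
print("share of inputs where the concept raises uncertainty:", (sens > 0).mean())
```

The linear probe and the direction-aligned sensitivity follow the standard CAV recipe; the finite-difference step merely stands in for an automatic-differentiation gradient of whatever uncertainty quantification the model actually exposes.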

@article{roberts2025_2503.03443,
  title={Conceptualizing Uncertainty},
  author={Isaac Roberts and Alexander Schulz and Sarah Schroeder and Fabian Hinder and Barbara Hammer},
  journal={arXiv preprint arXiv:2503.03443},
  year={2025}
}