Uncertainty Propagation in XAI: A Comparison of Analytical and Empirical Estimators
Teodor Chiaburu, Felix Bießmann, Frank Haußer
arXiv: 2504.03736 · 1 April 2025

Papers citing "Uncertainty Propagation in XAI: A Comparison of Analytical and Empirical Estimators" (15 of 15 papers shown)

CRAFT: Concept Recursive Activation FacTorization for Explainability
Thomas Fel, Agustin Picard, Louis Bethune, Thibaut Boissin, David Vigouroux, Julien Colin, Rémi Cadène, Thomas Serre
70 · 115 · 0 · 17 Nov 2022

Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond
Anna Hedström, Leander Weber, Dilyara Bareeva, Daniel G. Krakowczyk, Franz Motzkus, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne
XAI, ELM
56 · 176 · 0 · 14 Feb 2022

Debugging Tests for Model Explanations
Julius Adebayo, M. Muelly, Ilaria Liccardi, Been Kim
FAtt
66 · 181 · 0 · 10 Nov 2020

Reliable Post hoc Explanations: Modeling Uncertainty in Explainability
Dylan Slack, Sophie Hilgard, Sameer Singh, Himabindu Lakkaraju
FAtt
75 · 161 · 0 · 11 Aug 2020

CXPlain: Causal Explanations for Model Interpretation under Uncertainty
Patrick Schwab, W. Karlen
FAtt, CML
120 · 209 · 0 · 27 Oct 2019

Sanity Checks for Saliency Maps
Julius Adebayo, Justin Gilmer, M. Muelly, Ian Goodfellow, Moritz Hardt, Been Kim
FAtt, AAML, XAI
141 · 1,970 · 0 · 08 Oct 2018

A Theoretical Explanation for Perplexing Behaviors of Backpropagation-based Visualizations
Weili Nie, Yang Zhang, Ankit B. Patel
FAtt
127 · 151 · 0 · 18 May 2018

Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, F. Viégas, Rory Sayres
FAtt
219 · 1,842 · 0 · 30 Nov 2017

The (Un)reliability of saliency methods
Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, D. Erhan, Been Kim
FAtt, XAI
101 · 687 · 0 · 02 Nov 2017

A Unified Approach to Interpreting Model Predictions
Scott M. Lundberg, Su-In Lee
FAtt
1.1K · 22,002 · 0 · 22 May 2017

Axiomatic Attribution for Deep Networks
Mukund Sundararajan, Ankur Taly, Qiqi Yan
OOD, FAtt
188 · 6,015 · 0 · 04 Mar 2017

Not Just a Black Box: Learning Important Features Through Propagating Activation Differences
Avanti Shrikumar, Peyton Greenside, A. Shcherbina, A. Kundaje
FAtt
82 · 789 · 0 · 05 May 2016

Striving for Simplicity: The All Convolutional Net
Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, Martin Riedmiller
FAtt
248 · 4,681 · 0 · 21 Dec 2014

Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
Karen Simonyan, Andrea Vedaldi, Andrew Zisserman
FAtt
312 · 7,308 · 0 · 20 Dec 2013

Visualizing and Understanding Convolutional Networks
Matthew D. Zeiler, Rob Fergus
FAtt, SSL
595 · 15,893 · 0 · 12 Nov 2013