arXiv: 2008.05030

Reliable Post hoc Explanations: Modeling Uncertainty in Explainability
Dylan Slack, Sophie Hilgard, Sameer Singh, Himabindu Lakkaraju
11 August 2020
Tags: FAtt

Papers citing "Reliable Post hoc Explanations: Modeling Uncertainty in Explainability"
50 / 97 papers shown
Fixed Point Explainability
Emanuele La Malfa, Jon Vadillo, Marco Molinari, Michael Wooldridge
18 May 2025

Display Content, Display Methods and Evaluation Methods of the HCI in Explainable Recommender Systems: A Survey
Weiqing Li, Yue Xu, Yuefeng Li, Yinghui Huang
14 May 2025

DiCE-Extended: A Robust Approach to Counterfactual Explanations in Machine Learning
Volkan Bakir, Polat Goktas, Sureyya Akyuz
26 Apr 2025

Are We Merely Justifying Results ex Post Facto? Quantifying Explanatory Inversion in Post-Hoc Model Explanations
Zhen Tan, Song Wang, Yifan Li, Yu Kong, Jundong Li, Tianlong Chen, Huan Liu
Tags: FAtt
11 Apr 2025

Uncertainty Propagation in XAI: A Comparison of Analytical and Empirical Estimators
Teodor Chiaburu, Felix Bießmann, Frank Haußer
01 Apr 2025
Feature Importance Depends on Properties of the Data: Towards Choosing the Correct Explanations for Your Data and Decision Trees based Models
Célia Wafa Ayad, Thomas Bonnier, Benjamin Bosch, Sonali Parbhoo, Jesse Read
Tags: FAtt, XAI
11 Feb 2025

An Open API Architecture to Discover the Trustworthy Explanation of Cloud AI Services
Zerui Wang, Yan Liu, Jun Huang
05 Nov 2024

Explainability in AI Based Applications: A Framework for Comparing Different Techniques
Arne Grobrugge, Nidhi Mishra, Johannes Jakubik, G. Satzger
28 Oct 2024

Ensured: Explanations for Decreasing the Epistemic Uncertainty in Predictions
Helena Löfström, Tuwe Löfström, Johan Hallberg Szabadvary
07 Oct 2024

Counterfactuals As a Means for Evaluating Faithfulness of Attribution Methods in Autoregressive Language Models
Sepehr Kamahi, Yadollah Yaghoobzadeh
21 Aug 2024

Fooling SHAP with Output Shuffling Attacks
Jun Yuan, Aritra Dasgupta
12 Aug 2024

From Feature Importance to Natural Language Explanations Using LLMs with RAG
Sule Tekkesinoglu, Lars Kunze
Tags: FAtt
30 Jul 2024
Robustness of Explainable Artificial Intelligence in Industrial Process Modelling
Benedikt Kantz, Clemens Staudinger, C. Feilmayr, Johannes Wachlmayr, Alexander Haberl, Stefan Schuster, Franz Pernkopf
12 Jul 2024

Towards Understanding Sensitive and Decisive Patterns in Explainable AI: A Case Study of Model Interpretation in Geometric Deep Learning
Jiajun Zhu, Siqi Miao, Rex Ying, Pan Li
30 Jun 2024

Efficient and Accurate Explanation Estimation with Distribution Compression
Hubert Baniecki, Giuseppe Casalicchio, Bernd Bischl, Przemyslaw Biecek
Tags: FAtt
26 Jun 2024

CAT: Interpretable Concept-based Taylor Additive Models
Viet Duong, Qiong Wu, Zhengyi Zhou, Hongjue Zhao, Chenxiang Luo, Eric Zavesky, Huaxiu Yao, Huajie Shao
Tags: FAtt
25 Jun 2024

ProtoS-ViT: Visual foundation models for sparse self-explainable classifications
Hugues Turbé, Mina Bjelogrlic, G. Mengaldo, Christian Lovis
Tags: ViT
14 Jun 2024

Why do explanations fail? A typology and discussion on failures in XAI
Clara Bove, Thibault Laugel, Marie-Jeanne Lesot, C. Tijus, Marcin Detyniecki
22 May 2024
Evaluating Saliency Explanations in NLP by Crowdsourcing
Xiaotian Lu, Jiyi Li, Zhen Wan, Xiaofeng Lin, Koh Takeuchi, Hisashi Kashima
Tags: XAI, FAtt, LRM
17 May 2024

Post-hoc and manifold explanations analysis of facial expression data based on deep learning
Yang Xiao
29 Apr 2024

How explainable AI affects human performance: A systematic review of the behavioural consequences of saliency maps
Romy Müller
Tags: HAI
03 Apr 2024

Segmentation, Classification and Interpretation of Breast Cancer Medical Images using Human-in-the-Loop Machine Learning
David Vázquez-Lema, E. Mosqueira-Rey, Elena Hernández-Pereira, Carlos Fernández-Lozano, Fernando Seara-Romera, Jorge Pombo-Otero
Tags: LM&MA
29 Mar 2024

Evaluating Explanatory Capabilities of Machine Learning Models in Medical Diagnostics: A Human-in-the-Loop Approach
José Bobes-Bascarán, E. Mosqueira-Rey, Á. Fernández-Leal, Elena Hernández-Pereira, David Alonso-Ríos, V. Moret-Bonillo, Israel Figueirido-Arnoso, Y. Vidal-Ínsua
Tags: ELM
28 Mar 2024

Sanity Checks for Explanation Uncertainty
Matias Valdenegro-Toro, Mihir Mulye
Tags: FAtt
25 Mar 2024
Uncertainty Quantification for Gradient-based Explanations in Neural Networks
Mihir Mulye, Matias Valdenegro-Toro
Tags: UQCV, FAtt
25 Mar 2024

QUCE: The Minimisation and Quantification of Path-Based Uncertainty for Generative Counterfactual Explanations
J. Duell, M. Seisenberger, Hsuan-Wei Fu, Xiuyi Fan
Tags: UQCV, BDL
27 Feb 2024

Investigating the Impact of Model Instability on Explanations and Uncertainty
Sara Vera Marjanović, Isabelle Augenstein, Christina Lioma
Tags: AAML
20 Feb 2024

Explaining Probabilistic Models with Distributional Values
Luca Franceschi, Michele Donini, Cédric Archambeau, Matthias Seeger
Tags: FAtt
15 Feb 2024

The Duet of Representations and How Explanations Exacerbate It
Charles Wan, Rodrigo Belo, Leid Zejnilovic, Susana Lavado
Tags: CML, FAtt
13 Feb 2024

Advancing Explainable AI Toward Human-Like Intelligence: Forging the Path to Artificial Brain
Yongchen Zhou, Richard Jiang
07 Feb 2024
Variational Shapley Network: A Probabilistic Approach to Self-Explaining Shapley values with Uncertainty Quantification
Mert Ketenci, Inigo Urteaga, Victor Alfonso Rodriguez, Noémie Elhadad, A. Perotte
Tags: FAtt
06 Feb 2024

Understanding Disparities in Post Hoc Machine Learning Explanation
Vishwali Mhasawade, Salman Rahman, Zoe Haskell-Craig, R. Chunara
25 Jan 2024

The Distributional Uncertainty of the SHAP score in Explainable Machine Learning
Santiago Cifuentes, L. Bertossi, Nina Pardal, S. Abriola, Maria Vanina Martinez, Miguel Romero
Tags: TDI, FAtt
23 Jan 2024

Towards Modeling Uncertainties of Self-explaining Neural Networks via Conformal Prediction
Wei Qian, Chenxu Zhao, Yangyi Li, Fenglong Ma, Chao Zhang, Mengdi Huai
Tags: UQCV
03 Jan 2024
Rethinking Robustness of Model Attributions
Sandesh Kamath, Sankalp Mittal, Amit Deshpande, Vineeth N. Balasubramanian
16 Dec 2023

Generating Explanations to Understand and Repair Embedding-based Entity Alignment
Xiaobin Tian, Zequn Sun, Wei Hu
08 Dec 2023

Uncertainty in Additive Feature Attribution methods
Abhishek Madaan, Tanya Chowdhury, Neha Rana, James Allan, Tanmoy Chakraborty
29 Nov 2023

Interpretability is in the eye of the beholder: Human versus artificial classification of image segments generated by humans versus XAI
Romy Müller, Marius Thoss, Julian Ullrich, Steffen Seitz, Carsten Knoll
21 Nov 2023

SmoothHess: ReLU Network Feature Interactions via Stein's Lemma
Max Torop, A. Masoomi, Davin Hill, Kivanc Kose, Stratis Ioannidis, Jennifer Dy
01 Nov 2023
Refutation of Shapley Values for XAI -- Additional Evidence
Xuanxiang Huang, Sasha Rubin
Tags: AAML
30 Sep 2023

A Refutation of Shapley Values for Explainability
Xuanxiang Huang, Sasha Rubin
Tags: FAtt
06 Sep 2023

Calibrated Explanations for Regression
Tuwe Löfström, Helena Löfström, Ulf Johansson, Cecilia Sönströd, Rudy Matela
Tags: XAI, FAtt
30 Aug 2023

Generative Perturbation Analysis for Probabilistic Black-Box Anomaly Attribution
T. Idé, Naoki Abe
09 Aug 2023

Confident Feature Ranking
Bitya Neuhof, Y. Benjamini
Tags: FAtt
28 Jul 2023

Saliency strikes back: How filtering out high frequencies improves white-box explanations
Sabine Muzellec, Thomas Fel, Victor Boutin, Léo Andéol, R. V. Rullen, Thomas Serre
Tags: FAtt
18 Jul 2023
Stability Guarantees for Feature Attributions with Multiplicative Smoothing
Anton Xue, Rajeev Alur, Eric Wong
12 Jul 2023

On Formal Feature Attribution and Its Approximation
Jinqiang Yu, Alexey Ignatiev, Peter Stuckey
07 Jul 2023

CLIMAX: An exploration of Classifier-Based Contrastive Explanations
Praharsh Nanavati, Ranjitha Prasad
02 Jul 2023

Explainability is NOT a Game
Sasha Rubin, Xuanxiang Huang
27 Jun 2023

Are Good Explainers Secretly Human-in-the-Loop Active Learners?
Emma Nguyen, Abhishek Ghose
24 Jun 2023