ResearchTrend.AI

The Disagreement Problem in Explainable Machine Learning: A Practitioner's Perspective
arXiv:2202.01602 · 3 February 2022
Satyapriya Krishna, Tessa Han, Alex Gu, Steven Wu, S. Jabbari, Himabindu Lakkaraju

Papers citing "The Disagreement Problem in Explainable Machine Learning: A Practitioner's Perspective"

50 / 105 papers shown
On the Connection between Game-Theoretic Feature Attributions and Counterfactual Explanations
Emanuele Albini, Shubham Sharma, Saumitra Mishra, Danial Dervovic, Daniele Magazzeni
FAtt · 13 Jul 2023

Is Task-Agnostic Explainable AI a Myth?
Alicja Chaszczewicz
13 Jul 2023

Fighting the disagreement in Explainable Machine Learning with consensus
A. Banegas-Luna, Carlos Martínez-Cortés, H. Sánchez
FaML · 03 Jul 2023
The future of human-centric eXplainable Artificial Intelligence (XAI) is not post-hoc explanations
Vinitra Swamy, Jibril Frej, Tanja Kaser
01 Jul 2023

An Empirical Evaluation of the Rashomon Effect in Explainable Machine Learning
Sebastian Müller, Vanessa Toborek, Katharina Beckh, Matthias Jakobs, Christian Bauckhage, Pascal Welke
FAtt · 27 Jun 2023

Manipulation Risks in Explainable AI: The Implications of the Disagreement Problem
S. Goethals, David Martens, Theodoros Evgeniou
24 Jun 2023
Evaluation of Popular XAI Applied to Clinical Prediction Models: Can They be Trusted?
A. Brankovic, David Cook, Jessica Rahman, Wenjie Huang, Sankalp Khanna
21 Jun 2023

Consistent Explanations in the Face of Model Indeterminacy via Ensembling
Dan Ley, Leonard Tang, Matthew Nazari, Hongjin Lin, Suraj Srinivas, Himabindu Lakkaraju
09 Jun 2023

Sound Explanation for Trustworthy Machine Learning
Kai Jia, Pasapol Saowakon, L. Appelbaum, Martin Rinard
XAI · FAtt · FaML · 08 Jun 2023
Explaining Deep Learning for ECG Analysis: Building Blocks for Auditing and Knowledge Discovery
Patrick Wagner, Temesgen Mehari, Wilhelm Haverkamp, Nils Strodthoff
26 May 2023

Post Hoc Explanations of Language Models Can Improve Language Models
Satyapriya Krishna, Jiaqi Ma, Dylan Slack, Asma Ghandeharioun, Sameer Singh, Himabindu Lakkaraju
ReLM · LRM · 19 May 2023

Algorithmic Recourse with Missing Values
Kentaro Kanamori, Takuya Takagi, Ken Kobayashi, Yuichi Ike
28 Apr 2023
Disagreement amongst counterfactual explanations: How transparency can be deceptive
Dieter Brughmans, Lissa Melis, David Martens
25 Apr 2023

Explainability in AI Policies: A Critical Review of Communications, Reports, Regulations, and Standards in the EU, US, and UK
L. Nannini, Agathe Balayn, A. Smith
20 Apr 2023

Interpretable (not just posthoc-explainable) heterogeneous survivor bias-corrected treatment effects for assignment of postdischarge interventions to prevent readmissions
Hongjing Xia, Joshua C. Chang, S. Nowak, Sonya Mahajan, R. Mahajan, Ted L. Chang, Carson C. Chow
19 Apr 2023

The XAISuite framework and the implications of explanatory system dissonance
Shreyan Mitra, Leilani H. Gilpin
FAtt · 15 Apr 2023
From Explanation to Action: An End-to-End Human-in-the-loop Framework for Anomaly Reasoning and Management
Xueying Ding, Nikita Seleznev, Senthil Kumar, C. Bayan Bruss, L. Akoglu
06 Apr 2023

Why is plausibility surprisingly problematic as an XAI criterion?
Weina Jin, Xiaoxiao Li, Ghassan Hamarneh
30 Mar 2023

UFO: A unified method for controlling Understandability and Faithfulness Objectives in concept-based explanations for CNNs
V. V. Ramaswamy, Sunnie S. Y. Kim, Ruth C. Fong, Olga Russakovsky
27 Mar 2023
Reckoning with the Disagreement Problem: Explanation Consensus as a Training Objective
Avi Schwarzschild, Max Cembalest, K. Rao, Keegan E. Hines, John P Dickerson
FAtt · 23 Mar 2023

WebSHAP: Towards Explaining Any Machine Learning Models Anywhere
Zijie J. Wang, Duen Horng Chau
16 Mar 2023

Feature Importance Disparities for Data Bias Investigations
Peter W. Chang, Leor Fishman, Seth Neel
03 Mar 2023

Finding the right XAI method -- A Guide for the Evaluation and Ranking of Explainable AI Methods in Climate Science
P. Bommer, M. Kretschmer, Anna Hedström, Dilyara Bareeva, Marina M.-C. Höhne
01 Mar 2023
Function Composition in Trustworthy Machine Learning: Implementation Choices, Insights, and Questions
Manish Nagireddy, Moninder Singh, Samuel C. Hoffman, Evaline Ju, K. Ramamurthy, Kush R. Varshney
17 Feb 2023

The Meta-Evaluation Problem in Explainable AI: Identifying Reliable Estimators with MetaQuantus
Anna Hedström, P. Bommer, Kristoffer K. Wickstrom, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne
14 Feb 2023

Five policy uses of algorithmic transparency and explainability
Matthew R. O’Shaughnessy
06 Feb 2023

Quantifying Context Mixing in Transformers
Hosein Mohebbi, Willem H. Zuidema, Grzegorz Chrupała, A. Alishahi
30 Jan 2023
Explainable AI does not provide the explanations end-users are asking for
Savio Rozario, G. Cevora
XAI · 25 Jan 2023

How Data Scientists Review the Scholarly Literature
Sheshera Mysore, Mahmood Jasim, Haoru Song, Sarah Akbar, Andre Kenneth Chase Randall, Narges Mahyar
AI4CE · 10 Jan 2023

Logic-Based Explainability in Machine Learning
Sasha Rubin
LRM · XAI · 24 Oct 2022

Conditional Feature Importance for Mixed Data
Kristin Blesch, David S. Watson, Marvin N. Wright
06 Oct 2022

From Shapley Values to Generalized Additive Models and back
Sebastian Bordt, U. V. Luxburg
FAtt · TDI · 08 Sep 2022
Interpretable (not just posthoc-explainable) medical claims modeling for discharge placement to prevent avoidable all-cause readmissions or death
Joshua C. Chang, Ted L. Chang, Carson C. Chow, R. Mahajan, Sonya Mahajan, Joe Maisog, Shashaank Vattikuti, Hongjing Xia
FAtt · OOD · 28 Aug 2022

SoK: Explainable Machine Learning for Computer Security Applications
A. Nadeem, D. Vos, Clinton Cao, Luca Pajola, Simon Dieck, Robert Baumgartner, S. Verwer
22 Aug 2022

Learning Unsupervised Hierarchies of Audio Concepts
Darius Afchar, Romain Hennequin, Vincent Guigue
21 Jul 2022

TalkToModel: Explaining Machine Learning Models with Interactive Natural Language Conversations
Dylan Slack, Satyapriya Krishna, Himabindu Lakkaraju, Sameer Singh
08 Jul 2022
Fidelity of Ensemble Aggregation for Saliency Map Explanations using Bayesian Optimization Techniques
Yannik Mahlau, Christian Nolde
FAtt · 04 Jul 2022

"Explanation" is Not a Technical Term: The Problem of Ambiguity in XAI
Leilani H. Gilpin, Andrew R. Paley, M. A. Alam, Sarah Spurlock, Kristian J. Hammond
XAI · 27 Jun 2022

OpenXAI: Towards a Transparent Evaluation of Model Explanations
Chirag Agarwal, Dan Ley, Satyapriya Krishna, Eshika Saxena, Martin Pawelczyk, Nari Johnson, Isha Puri, Marinka Zitnik, Himabindu Lakkaraju
XAI · 22 Jun 2022

Saliency Cards: A Framework to Characterize and Compare Saliency Methods
Angie Boggust, Harini Suresh, Hendrik Strobelt, John Guttag, Arvindmani Satyanarayan
FAtt · XAI · 07 Jun 2022

Which Explanation Should I Choose? A Function Approximation Perspective to Characterizing Post Hoc Explanations
Tessa Han, Suraj Srinivas, Himabindu Lakkaraju
FAtt · 02 Jun 2022
Transforming medical imaging with Transformers? A comparative review of key properties, current progresses, and future perspectives
Jun Li, Junyu Chen, Yucheng Tang, Ce Wang, Bennett A. Landman, S. K. Zhou
ViT · OOD · MedIm · 02 Jun 2022

Unfooling Perturbation-Based Post Hoc Explainers
Zachariah Carmichael, Walter J. Scheirer
AAML · 29 May 2022

A Psychological Theory of Explainability
Scott Cheng-Hsin Yang, Tomas Folke, Patrick Shafto
XAI · FAtt · 17 May 2022

SIBILA: A novel interpretable ensemble of general-purpose machine learning models applied to medical contexts
A. Banegas-Luna, Horacio Pérez-Sánchez
12 May 2022

A Song of (Dis)agreement: Evaluating the Evaluation of Explainable Artificial Intelligence in Natural Language Processing
Michael Neely, Stefan F. Schouten, Maurits J. R. Bleeker, Ana Lucic
XAI · 09 May 2022
Explainable Artificial Intelligence for Bayesian Neural Networks: Towards trustworthy predictions of ocean dynamics
Mariana C. A. Clare, Maike Sonnewald, Redouane Lguensat, Julie Deshayes, Venkatramani Balaji
BDL · 30 Apr 2022

Analyzing the Effects of Handling Data Imbalance on Learned Features from Medical Images by Looking Into the Models
Ashkan Khakzar, Yawei Li, Yang Zhang, Mirac Sanisoglu, Seong Tae Kim, Mina Rezaei, Bernd Bischl, Nassir Navab
04 Apr 2022

A Typology for Exploring the Mitigation of Shortcut Behavior
Felix Friedrich, Wolfgang Stammer, P. Schramowski, Kristian Kersting
LLMAG · 04 Mar 2022

Post-Hoc Explanations Fail to Achieve their Purpose in Adversarial Contexts
Sebastian Bordt, Michèle Finck, Eric Raidl, U. V. Luxburg
AILaw · 25 Jan 2022