Responsibility: An Example-based Explainable AI approach via Training Process Inspection
Faraz Khadivpour, Arghasree Banerjee, Matthew J. Guzdial
arXiv:2209.03433, 7 September 2022
Topics: XAI
Papers citing "Responsibility: An Example-based Explainable AI approach via Training Process Inspection" (3 of 3 shown):
1. What Makes for a Good Saliency Map? Comparing Strategies for Evaluating Saliency Maps in Explainable AI (XAI)
   Felix Kares, Timo Speith, Hanwei Zhang, Markus Langer
   Topics: FAtt, XAI
   23 Apr 2025
2. How explainable AI affects human performance: A systematic review of the behavioural consequences of saliency maps
   Romy Müller
   Topics: HAI
   03 Apr 2024
3. Towards A Rigorous Science of Interpretable Machine Learning
   Finale Doshi-Velez, Been Kim
   Topics: XAI, FaML
   28 Feb 2017