Interpretability is in the eye of the beholder: Human versus artificial classification of image segments generated by humans versus XAI
Romy Müller, Marius Thoss, Julian Ullrich, Steffen Seitz, Carsten Knoll
arXiv:2311.12481, 21 November 2023
Papers citing "Interpretability is in the eye of the beholder: Human versus artificial classification of image segments generated by humans versus XAI" (17 of 17 papers shown)
1. Post hoc Explanations may be Ineffective for Detecting Unknown Spurious Correlation. Julius Adebayo, M. Muelly, H. Abelson, Been Kim. 09 Dec 2022. Citations: 87.
2. CRAFT: Concept Recursive Activation FacTorization for Explainability. Thomas Fel, Agustin Picard, Louis Bethune, Thibaut Boissin, David Vigouroux, Julien Colin, Rémi Cadène, Thomas Serre. 17 Nov 2022. Citations: 115.
3. Perception Visualization: Seeing Through the Eyes of a DNN. Loris Giulivi, Mark J. Carman, Giacomo Boracchi. 21 Apr 2022. Citations: 6.
4. Human Attention in Fine-grained Classification. Yao Rong, Wenjia Xu, Zeynep Akata, Enkelejda Kasneci. 02 Nov 2021. Citations: 37.
5. Crowdsourcing Evaluation of Saliency-based XAI Methods. Xiaotian Lu, A. Tolmachev, Tatsuya Yamamoto, Koh Takeuchi, Seiji Okajima, T. Takebayashi, Koji Maruhashi, H. Kashima. 27 Jun 2021. Tags: XAI, FAtt. Citations: 14.
6. Matching Representations of Explainable Artificial Intelligence and Eye Gaze for Human-Machine Interaction. Tiffany Hwu, Mia Levy, Steven Skorheim, David J. Huber. 30 Jan 2021. Citations: 6.
7. Dissonance Between Human and Machine Understanding. Zijian Zhang, Jaspreet Singh, U. Gadiraju, Avishek Anand. 18 Jan 2021. Citations: 74.
8. How Useful Are the Machine-Generated Interpretations to General Users? A Human Evaluation on Guessing the Incorrectly Predicted Labels. Hua Shen, Ting-Hao 'Kenneth' Huang. 26 Aug 2020. Tags: FAtt, HAI. Citations: 56.
9. Reliable Post hoc Explanations: Modeling Uncertainty in Explainability. Dylan Slack, Sophie Hilgard, Sameer Singh, Himabindu Lakkaraju. 11 Aug 2020. Tags: FAtt. Citations: 161.
10. Interpreting Interpretations: Organizing Attribution Methods by Criteria. Zifan Wang, Piotr (Peter) Mardziel, Anupam Datta, Matt Fredrikson. 19 Feb 2020. Tags: XAI, FAtt. Citations: 17.
11. Sanity Checks for Saliency Metrics. Richard J. Tomsett, Daniel Harborne, Supriyo Chakraborty, Prudhvi K. Gurram, Alun D. Preece. 29 Nov 2019. Tags: XAI. Citations: 170.
12. Explaining and Interpreting LSTMs. L. Arras, Jose A. Arjona-Medina, Michael Widrich, G. Montavon, Michael Gillhofer, K. Müller, Sepp Hochreiter, Wojciech Samek. 25 Sep 2019. Tags: FAtt, AI4TS. Citations: 79.
13. XRAI: Better Attributions Through Regions. A. Kapishnikov, Tolga Bolukbasi, Fernanda Viégas, Michael Terry. 06 Jun 2019. Tags: FAtt, XAI. Citations: 212.
14. Axiomatic Attribution for Deep Networks. Mukund Sundararajan, Ankur Taly, Qiqi Yan. 04 Mar 2017. Tags: OOD, FAtt. Citations: 6,015.
15. Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization. Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra. 07 Oct 2016. Tags: FAtt. Citations: 20,070.
16. Human Attention in Visual Question Answering: Do Humans and Deep Networks Look at the Same Regions? Abhishek Das, Harsh Agrawal, C. L. Zitnick, Devi Parikh, Dhruv Batra. 11 Jun 2016. Citations: 466.
17. "Why Should I Trust You?": Explaining the Predictions of Any Classifier. Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin. 16 Feb 2016. Tags: FAtt, FaML. Citations: 17,027.