Interpretability is in the eye of the beholder: Human versus artificial classification of image segments generated by humans versus XAI

Romy Müller, Marius Thoss, Julian Ullrich, Steffen Seitz, Carsten Knoll
21 November 2023 (arXiv:2311.12481)

Papers citing "Interpretability is in the eye of the beholder: Human versus artificial classification of image segments generated by humans versus XAI"

17 papers
Post hoc Explanations may be Ineffective for Detecting Unknown Spurious Correlation
Julius Adebayo, M. Muelly, H. Abelson, Been Kim
09 Dec 2022

CRAFT: Concept Recursive Activation FacTorization for Explainability
Thomas Fel, Agustin Picard, Louis Bethune, Thibaut Boissin, David Vigouroux, Julien Colin, Rémi Cadène, Thomas Serre
17 Nov 2022

Perception Visualization: Seeing Through the Eyes of a DNN
Loris Giulivi, Mark J. Carman, Giacomo Boracchi
21 Apr 2022

Human Attention in Fine-grained Classification
Yao Rong, Wenjia Xu, Zeynep Akata, Enkelejda Kasneci
02 Nov 2021

Crowdsourcing Evaluation of Saliency-based XAI Methods
Xiaotian Lu, A. Tolmachev, Tatsuya Yamamoto, Koh Takeuchi, Seiji Okajima, T. Takebayashi, Koji Maruhashi, H. Kashima
Communities: XAI, FAtt
27 Jun 2021

Matching Representations of Explainable Artificial Intelligence and Eye Gaze for Human-Machine Interaction
Tiffany Hwu, Mia Levy, Steven Skorheim, David J. Huber
30 Jan 2021

Dissonance Between Human and Machine Understanding
Zijian Zhang, Jaspreet Singh, U. Gadiraju, Avishek Anand
18 Jan 2021

How Useful Are the Machine-Generated Interpretations to General Users? A Human Evaluation on Guessing the Incorrectly Predicted Labels
Hua Shen, Ting-Hao 'Kenneth' Huang
Communities: FAtt, HAI
26 Aug 2020

Reliable Post hoc Explanations: Modeling Uncertainty in Explainability
Dylan Slack, Sophie Hilgard, Sameer Singh, Himabindu Lakkaraju
Communities: FAtt
11 Aug 2020

Interpreting Interpretations: Organizing Attribution Methods by Criteria
Zifan Wang, Piotr (Peter) Mardziel, Anupam Datta, Matt Fredrikson
Communities: XAI, FAtt
19 Feb 2020

Sanity Checks for Saliency Metrics
Richard J. Tomsett, Daniel Harborne, Supriyo Chakraborty, Prudhvi K. Gurram, Alun D. Preece
Communities: XAI
29 Nov 2019

Explaining and Interpreting LSTMs
L. Arras, Jose A. Arjona-Medina, Michael Widrich, G. Montavon, Michael Gillhofer, K. Müller, Sepp Hochreiter, Wojciech Samek
Communities: FAtt, AI4TS
25 Sep 2019

XRAI: Better Attributions Through Regions
A. Kapishnikov, Tolga Bolukbasi, Fernanda Viégas, Michael Terry
Communities: FAtt, XAI
06 Jun 2019

Axiomatic Attribution for Deep Networks
Mukund Sundararajan, Ankur Taly, Qiqi Yan
Communities: OOD, FAtt
04 Mar 2017

Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization
Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra
Communities: FAtt
07 Oct 2016

Human Attention in Visual Question Answering: Do Humans and Deep Networks Look at the Same Regions?
Abhishek Das, Harsh Agrawal, C. L. Zitnick, Devi Parikh, Dhruv Batra
11 Jun 2016

"Why Should I Trust You?": Explaining the Predictions of Any Classifier
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
FAttFaML
1.2K
17,027
0
16 Feb 2016