ResearchTrend.AI

Towards Human-centered Explainable AI: A Survey of User Studies for Model Explanations
arXiv:2210.11584
20 October 2022
Yao Rong, Tobias Leemann, Thai-trang Nguyen, Lisa Fiedler, Peizhu Qian, Vaibhav Unhelkar, Tina Seidel, Gjergji Kasneci, Enkelejda Kasneci
Tags: ELM

Papers citing "Towards Human-centered Explainable AI: A Survey of User Studies for Model Explanations"

47 papers shown

Exploring the Impact of Explainable AI and Cognitive Capabilities on Users' Decisions
Federico Maria Cau, Lucio Davide Spano
02 May 2025
What Makes for a Good Saliency Map? Comparing Strategies for Evaluating Saliency Maps in Explainable AI (XAI)
Felix Kares, Timo Speith, Hanwei Zhang, Markus Langer
Tags: FAtt, XAI
23 Apr 2025
Immersive Explainability: Visualizing Robot Navigation Decisions through XAI Semantic Scene Projections in Virtual Reality
Jorge de Heuvel, Sebastian Müller, Marlene Wessels, Aftab Akhtar, Christian Bauckhage, Maren Bennewitz
01 Apr 2025
Which LIME should I trust? Concepts, Challenges, and Solutions
Patrick Knab, Sascha Marton, Udo Schlegel, Christian Bartelt
Tags: FAtt
31 Mar 2025
CoE: Chain-of-Explanation via Automatic Visual Concept Circuit Description and Polysemanticity Quantification
Wenlong Yu, Qilong Wang, Chuang Liu, Dong Li, Q. Hu
Tags: LRM
19 Mar 2025
Enhancing Job Salary Prediction with Disentangled Composition Effect Modeling: A Neural Prototyping Approach
Yang Ji, Ying Sun, Hengshu Zhu
17 Mar 2025
Class-Dependent Perturbation Effects in Evaluating Time Series Attributions
Gregor Baer, Isel Grau, Chao Zhang, Pieter Van Gorp
Tags: AAML
24 Feb 2025
Is Conversational XAI All You Need? Human-AI Decision Making With a Conversational XAI Assistant
Gaole He, Nilay Aishwarya, U. Gadiraju
29 Jan 2025
Attention Mechanisms Don't Learn Additive Models: Rethinking Feature Importance for Transformers
Tobias Leemann, Alina Fastowski, Felix Pfeiffer, Gjergji Kasneci
10 Jan 2025
A Review of Multimodal Explainable Artificial Intelligence: Past, Present and Future
Shilin Sun, Wenbin An, Feng Tian, Fang Nan, Qidong Liu, J. Liu, N. Shah, Ping Chen
18 Dec 2024
GPT for Games: An Updated Scoping Review (2020-2024)
Daijin Yang, Erica Kleinman, Casper Harteveld
Tags: LLMAG, AI4TS, AI4CE
01 Nov 2024
Evaluating Explanations Through LLMs: Beyond Traditional User Studies
Francesco Bombassei De Bona, Gabriele Dominici, Tim Miller, Marc Langheinrich, M. Gjoreski
23 Oct 2024
shapiq: Shapley Interactions for Machine Learning
Maximilian Muschalik, Hubert Baniecki, Fabian Fumagalli, Patrick Kolpaczki, Barbara Hammer, Eyke Hüllermeier
Tags: TDI
02 Oct 2024
Towards User-Focused Research in Training Data Attribution for Human-Centered Explainable AI
Elisa Nguyen, Johannes Bertram, Evgenii Kortukov, Jean Y. Song, Seong Joon Oh
Tags: TDI
25 Sep 2024
A User Study on Contrastive Explanations for Multi-Effector Temporal Planning with Non-Stationary Costs
Xiaowei Liu, Kevin McAreavey, Weiru Liu
20 Sep 2024
Confident Teacher, Confident Student? A Novel User Study Design for Investigating the Didactic Potential of Explanations and their Impact on Uncertainty
Teodor Chiaburu, Frank Haußer, Felix Bießmann
10 Sep 2024
Aggregated Attributions for Explanatory Analysis of 3D Segmentation Models
Maciej Chrabaszcz, Hubert Baniecki, Piotr Komorowski, Szymon Płotka, Przemysław Biecek
23 Jul 2024
XEdgeAI: A Human-centered Industrial Inspection Framework with Data-centric Explainable Edge AI Approach
Truong Thanh Hung Nguyen, Phuc Truong Loc Nguyen, Hung Cao
16 Jul 2024
Efficient and Accurate Explanation Estimation with Distribution Compression
Hubert Baniecki, Giuseppe Casalicchio, Bernd Bischl, Przemyslaw Biecek
Tags: FAtt
26 Jun 2024
InFiConD: Interactive No-code Fine-tuning with Concept-based Knowledge Distillation
Jinbin Huang, Wenbin He, Liang Gou, Liu Ren, Chris Bryan
25 Jun 2024
Inpainting the Gaps: A Novel Framework for Evaluating Explanation Methods in Vision Transformers
Lokesh Badisa, Sumohana S. Channappayya
17 Jun 2024
Classification Metrics for Image Explanations: Towards Building Reliable XAI-Evaluations
Benjamin Frész, Lena Lörcher, Marco F. Huber
07 Jun 2024
A Sim2Real Approach for Identifying Task-Relevant Properties in Interpretable Machine Learning
Eura Nofshin, Esther Brown, Brian Lim, Weiwei Pan, Finale Doshi-Velez
31 May 2024
Selective Explanations
Lucas Monteiro Paes, Dennis L. Wei, Flavio du Pin Calmon
Tags: FAtt
29 May 2024
Data Science Principles for Interpretable and Explainable AI
Kris Sankaran
Tags: FaML
17 May 2024
Faithful Attention Explainer: Verbalizing Decisions Based on Discriminative Features
Yao Rong, David Scheerer, Enkelejda Kasneci
16 May 2024
The Drawback of Insight: Detailed Explanations Can Reduce Agreement with XAI
Sabid Bin Habib Pias, Alicia Freel, Timothy Trammel, Taslima Akter, Donald Williamson, Apu Kapadia
30 Apr 2024
Evaluating the Explainability of Attributes and Prototypes for a Medical Classification Model
Luisa Gallée, C. Lisson, C. Lisson, Daniela Drees, Felix Weig, D. Vogele, Meinrad Beer, Michael Götz
15 Apr 2024
How explainable AI affects human performance: A systematic review of the behavioural consequences of saliency maps
Romy Müller
Tags: HAI
03 Apr 2024
Is my Data in your AI Model? Membership Inference Test with Application to Face Images
Daniel DeAlcala, Aythami Morales, Gonzalo Mancera, Julian Fierrez, Ruben Tolosana, J. Ortega-Garcia
Tags: CVBM
14 Feb 2024
Explainable Predictive Maintenance: A Survey of Current Methods, Challenges and Opportunities
Logan Cummins, Alexander Sommers, Somayeh Bakhtiari Ramezani, Sudip Mittal, Joseph E. Jabour, Maria Seale, Shahram Rahimi
15 Jan 2024
I-CEE: Tailoring Explanations of Image Classification Models to User Expertise
Yao Rong, Peizhu Qian, Vaibhav Unhelkar, Enkelejda Kasneci
19 Dec 2023
Interpretability is in the eye of the beholder: Human versus artificial classification of image segments generated by humans versus XAI
Romy Müller, Marius Thoss, Julian Ullrich, Steffen Seitz, Carsten Knoll
21 Nov 2023
Tell Me a Story! Narrative-Driven XAI with Large Language Models
David Martens, James Hinns, Camille Dams, Mark Vergouwen, Theodoros Evgeniou
29 Sep 2023
Human Attention-Guided Explainable Artificial Intelligence for Computer Vision Models
Guoyang Liu, Jindi Zhang, Antoni B. Chan, J. H. Hsiao
05 May 2023
Sparks of Artificial General Intelligence: Early experiments with GPT-4
Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, J. Gehrke, Eric Horvitz, ..., Scott M. Lundberg, Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro, Yi Zhang
Tags: ELM, AI4MH, AI4CE, ALM
22 Mar 2023
The Utility of Explainable AI in Ad Hoc Human-Machine Teaming
Rohan R. Paleja, Muyleng Ghuy, Nadun R. Arachchige, Reed Jensen, Matthew C. Gombolay
08 Sep 2022
A Psychological Theory of Explainability
Scott Cheng-Hsin Yang, Tomas Folke, Patrick Shafto
Tags: XAI, FAtt
17 May 2022
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Jason W. Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, F. Xia, Ed H. Chi, Quoc Le, Denny Zhou
Tags: LM&Ro, LRM, AI4CE, ReLM
28 Jan 2022
Human Interpretation of Saliency-based Explanation Over Text
Hendrik Schuff, Alon Jacovi, Heike Adel, Yoav Goldberg, Ngoc Thang Vu
Tags: MILM, XAI, FAtt
27 Jan 2022
HIVE: Evaluating the Human Interpretability of Visual Explanations
Sunnie S. Y. Kim, Nicole Meister, V. V. Ramaswamy, Ruth C. Fong, Olga Russakovsky
06 Dec 2021
Intuitively Assessing ML Model Reliability through Example-Based Explanations and Editing Model Inputs
Harini Suresh, Kathleen M. Lewis, John Guttag, Arvind Satyanarayan
Tags: FAtt
17 Feb 2021
Evaluating the Interpretability of Generative Models by Interactive Reconstruction
A. Ross, Nina Chen, Elisa Zhao Hang, Elena L. Glassman, Finale Doshi-Velez
02 Feb 2021
Artificial Intelligence Methods in In-Cabin Use Cases: A Survey
Yao Rong, Chao Han, Christian Hellert, Antje Loyal, Enkelejda Kasneci
06 Jan 2021
On Completeness-aware Concept-Based Explanations in Deep Neural Networks
Chih-Kuan Yeh, Been Kim, Sercan Ö. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar
Tags: FAtt
17 Oct 2019
Methods for Interpreting and Understanding Deep Neural Networks
G. Montavon, Wojciech Samek, K. Müller
Tags: FaML
24 Jun 2017
Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
Tags: XAI, FaML
28 Feb 2017