
I-CEE: Tailoring Explanations of Image Classification Models to User Expertise

19 December 2023
Yao Rong, Peizhu Qian, Vaibhav Unhelkar, Enkelejda Kasneci

Papers citing "I-CEE: Tailoring Explanations of Image Classification Models to User Expertise"

29 papers shown

Towards Human-centered Explainable AI: A Survey of User Studies for Model Explanations
Yao Rong, Tobias Leemann, Thai-Trang Nguyen, Lisa Fiedler, Peizhu Qian, Vaibhav Unhelkar, Tina Seidel, Gjergji Kasneci, Enkelejda Kasneci
ELM · 89 · 102 · 0 · 20 Oct 2022

Out of One, Many: Using Language Models to Simulate Human Samples
Lisa P. Argyle, Ethan C. Busby, Nancy Fulda, Joshua R. Gubler, Christopher Rytting, David Wingate
SyDa · 98 · 602 · 0 · 14 Sep 2022

A Psychological Theory of Explainability
Scott Cheng-Hsin Yang, Tomas Folke, Patrick Shafto
XAI, FAtt · 90 · 17 · 0 · 17 May 2022

Imitation Learning by Estimating Expertise of Demonstrators
Mark Beliaev, Andy Shih, Stefano Ermon, Dorsa Sadigh, Ramtin Pedarsani
82 · 49 · 0 · 02 Feb 2022

Explain, Edit, and Understand: Rethinking User Study Design for Evaluating Model Explanations
Siddhant Arora, Danish Pruthi, Norman M. Sadeh, William W. Cohen, Zachary Chase Lipton, Graham Neubig
FAtt · 76 · 41 · 0 · 17 Dec 2021

Human Attention in Fine-grained Classification
Yao Rong, Wenjia Xu, Zeynep Akata, Enkelejda Kasneci
88 · 37 · 0 · 02 Nov 2021

Human-Centered Explainable AI (XAI): From Algorithms to User Experiences
Q. Vera Liao, Kush R. Varshney
107 · 234 · 0 · 20 Oct 2021

Question-Driven Design Process for Explainable AI User Experiences
Q. Vera Liao, Milena Pribić, Jaesik Han, Sarah Miller, Daby M. Sow
110 · 54 · 0 · 08 Apr 2021

Mitigating belief projection in explainable artificial intelligence via Bayesian Teaching
Scott Cheng-Hsin Yang, Wai Keen Vong, Ravi B. Sojitra, Tomas Folke, Patrick Shafto
88 · 43 · 0 · 07 Feb 2021

Expanding Explainability: Towards Social Transparency in AI systems
Upol Ehsan, Q. Vera Liao, Michael J. Muller, Mark O. Riedl, Justin D. Weisz
89 · 408 · 0 · 12 Jan 2021

Learning Interpretable Concept-Based Models with Human Feedback
Isaac Lage, Finale Doshi-Velez
47 · 25 · 0 · 04 Dec 2020

Leakage-Adjusted Simulatability: Can Models Generate Non-Trivial Explanations of Their Behavior in Natural Language?
Peter Hase, Shiyue Zhang, Harry Xie, Mohit Bansal
72 · 102 · 0 · 08 Oct 2020

A Survey of Deep Active Learning
Pengzhen Ren, Yun Xiao, Xiaojun Chang, Po-Yao (Bernie) Huang, Zhihui Li, Brij B. Gupta, Xiaojiang Chen, Xin Wang
120 · 1,153 · 0 · 30 Aug 2020

Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior?
Peter Hase, Mohit Bansal
FAtt · 77 · 304 · 0 · 04 May 2020

Sanity Checks for Saliency Metrics
Richard J. Tomsett, Daniel Harborne, Supriyo Chakraborty, Prudhvi K. Gurram, Alun D. Preece
XAI · 103 · 170 · 0 · 29 Nov 2019

Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
Alejandro Barredo Arrieta, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, ..., S. Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera
XAI · 135 · 6,321 · 0 · 22 Oct 2019

On Completeness-aware Concept-Based Explanations in Deep Neural Networks
Chih-Kuan Yeh, Been Kim, Sercan O. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar
FAtt · 303 · 307 · 0 · 17 Oct 2019

Exploring Computational User Models for Agent Policy Summarization
Isaac Lage, Daphna Lifschitz, Finale Doshi-Velez, Ofra Amir
LLMAG · 76 · 76 · 0 · 30 May 2019

Variational Adversarial Active Learning
Samarth Sinha, Sayna Ebrahimi, Trevor Darrell
GAN, DRL, VLM, SSL · 140 · 579 · 0 · 31 Mar 2019

Representer Point Selection for Explaining Deep Neural Networks
Chih-Kuan Yeh, Joon Sik Kim, Ian En-Hsu Yen, Pradeep Ravikumar
TDI · 94 · 254 · 0 · 23 Nov 2018

RISE: Randomized Input Sampling for Explanation of Black-box Models
Vitali Petsiuk, Abir Das, Kate Saenko
FAtt · 188 · 1,176 · 0 · 19 Jun 2018

Disentangling by Factorising
Hyunjik Kim, Andriy Mnih
CoGe, OOD · 70 · 1,356 · 0 · 16 Feb 2018

A Unified Approach to Interpreting Model Predictions
Scott M. Lundberg, Su-In Lee
FAtt · 1.1K · 22,090 · 0 · 22 May 2017

Understanding Black-box Predictions via Influence Functions
Pang Wei Koh, Percy Liang
TDI · 227 · 2,910 · 0 · 14 Mar 2017

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
XAI, FaML · 420 · 3,824 · 0 · 28 Feb 2017

Enabling Robots to Communicate their Objectives
Sandy H. Huang, David Held, Pieter Abbeel, Anca Dragan
80 · 161 · 0 · 11 Feb 2017

Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization
Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra
FAtt · 357 · 20,136 · 0 · 07 Oct 2016

"Why Should I Trust You?": Explaining the Predictions of Any Classifier
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
FAttFaML
1.2K
17,071
0
16 Feb 2016
Deep Residual Learning for Image Recognition
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
MedIm · 2.3K · 194,641 · 0 · 10 Dec 2015