"Help Me Help the AI": Understanding How Explainability Can Support
  Human-AI Interaction

"Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction

2 October 2022
Sunnie S. Y. Kim, E. A. Watkins, Olga Russakovsky, Ruth C. Fong, Andrés Monroy-Hernández

Papers citing ""Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction"

29 papers

Interactivity x Explainability: Toward Understanding How Interactivity Can Improve Computer Vision Explanations
Indu Panigrahi, Sunnie S. Y. Kim, Amna Liaqat, Rohan Jinturkar, Olga Russakovsky, Ruth C. Fong, Parastoo Abtahi
Topics: FAtt, HAI · Citations: 1 · 14 Apr 2025

Are we measuring trust correctly in explainability, interpretability, and transparency research?
Tim Miller
Citations: 23 · 31 Aug 2022

Neural Basis Models for Interpretability
Filip Radenovic, Abhimanyu Dubey, D. Mahajan
Topics: FAtt · Citations: 47 · 27 May 2022

Who Goes First? Influences of Human-AI Workflow on Decision Making in Clinical Imaging
Riccardo Fogliato, Shreya Chappidi, M. Lungren, Michael Fitzke, Mark Parkinson, Diane U Wilson, Paul Fisher, Eric Horvitz, K. Inkpen, Besmira Nushi
Citations: 70 · 19 May 2022

StoryBuddy: A Human-AI Collaborative Chatbot for Parent-Child Interactive Storytelling with Flexible Parental Involvement
Zheng Zhang, Ying Xu, Yanhao Wang, Bingsheng Yao, Daniel E. Ritchie, Tongshuang Wu, Mo Yu, Dakuo Wang, Toby Jia-Jun Li
Citations: 119 · 13 Feb 2022

CoAuthor: Designing a Human-AI Collaborative Writing Dataset for Exploring Language Model Capabilities
Mina Lee, Percy Liang, Qian Yang
Topics: HAI · Citations: 370 · 18 Jan 2022

This Looks Like That... Does it? Shortcomings of Latent Space Prototype Interpretability in Deep Networks
Adrian Hoffmann, Claudio Fanconi, Rahul Rade, Jonas Köhler
Citations: 63 · 05 May 2021

Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges
Cynthia Rudin, Chaofan Chen, Zhi Chen, Haiyang Huang, Lesia Semenova, Chudi Zhong
Topics: FaML, AI4CE, LRM · Citations: 662 · 20 Mar 2021

Unbox the Black-box for the Medical Explainable AI via Multi-modal and Multi-centre Data Fusion: A Mini-Review, Two Showcases and Beyond
Guang Yang, Qinghao Ye, Jun Xia
Citations: 489 · 03 Feb 2021

Beyond Expertise and Roles: A Framework to Characterize the Stakeholders of Interpretable Machine Learning and their Needs
Harini Suresh, Steven R. Gomez, K. Nam, Arvind Satyanarayan
Citations: 129 · 24 Jan 2021

How Useful Are the Machine-Generated Interpretations to General Users? A Human Evaluation on Guessing the Incorrectly Predicted Labels
Hua Shen, Ting-Hao 'Kenneth' Huang
Topics: FAtt, HAI · Citations: 56 · 26 Aug 2020

Survey of XAI in digital pathology
Milda Pocevičiūtė, Gabriel Eilertsen, Claes Lundström
Citations: 56 · 14 Aug 2020

Concept Bottleneck Models
Pang Wei Koh, Thao Nguyen, Y. S. Tang, Stephen Mussmann, Emma Pierson, Been Kim, Percy Liang
Citations: 807 · 09 Jul 2020

Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance
Gagan Bansal, Tongshuang Wu, Joyce Zhou, Raymond Fok, Besmira Nushi, Ece Kamar, Marco Tulio Ribeiro, Daniel S. Weld
Citations: 588 · 26 Jun 2020

Explainable deep learning models in medical image analysis
Amitojdeep Singh, S. Sengupta, Vasudevan Lakshminarayanan
Topics: XAI · Citations: 486 · 28 May 2020

Causal Interpretability for Machine Learning -- Problems, Methods and Evaluation
Raha Moraffah, Mansooreh Karami, Ruocheng Guo, A. Raglin, Huan Liu
Topics: CML, ELM, XAI · Citations: 217 · 09 Mar 2020

Questioning the AI: Informing Design Practices for Explainable AI User Experiences
Q. V. Liao, D. Gruen, Sarah Miller
Citations: 709 · 08 Jan 2020

What Clinicians Want: Contextualizing Explainable Machine Learning for Clinical End Use
S. Tonekaboni, Shalmali Joshi, M. Mccradden, Anna Goldenberg
Citations: 389 · 13 May 2019

Counterfactual Visual Explanations
Yash Goyal, Ziyan Wu, Jan Ernst, Dhruv Batra, Devi Parikh, Stefan Lee
Topics: CML · Citations: 510 · 16 Apr 2019

RISE: Randomized Input Sampling for Explanation of Black-box Models
Vitali Petsiuk, Abir Das, Kate Saenko
Topics: FAtt · Citations: 1,164 · 19 Jun 2018

A Survey Of Methods For Explaining Black Box Models
Riccardo Guidotti, A. Monreale, Salvatore Ruggieri, Franco Turini, D. Pedreschi, F. Giannotti
Topics: XAI · Citations: 3,922 · 06 Feb 2018

Net2Vec: Quantifying and Explaining how Concepts are Encoded by Filters in Deep Neural Networks
Ruth C. Fong, Andrea Vedaldi
Topics: FAtt · Citations: 263 · 10 Jan 2018

Explainable AI: Beware of Inmates Running the Asylum Or: How I Learnt to Stop Worrying and Love the Social and Behavioural Sciences
Tim Miller, Piers Howe, L. Sonenberg
Topics: AI4TS, SyDa · Citations: 373 · 02 Dec 2017

Explanation in Artificial Intelligence: Insights from the Social Sciences
Tim Miller
Topics: XAI · Citations: 4,229 · 22 Jun 2017

Interpretable Explanations of Black Boxes by Meaningful Perturbation
Ruth C. Fong, Andrea Vedaldi
Topics: FAtt, AAML · Citations: 1,514 · 11 Apr 2017

Understanding Black-box Predictions via Influence Functions
Pang Wei Koh, Percy Liang
Topics: TDI · Citations: 2,854 · 14 Mar 2017

"Why Should I Trust You?": Explaining the Predictions of Any Classifier
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
FAtt
FaML
681
16,828
0
16 Feb 2016
Learning Deep Features for Discriminative Localization
Bolei Zhou, A. Khosla, Àgata Lapedriza, A. Oliva, Antonio Torralba
Topics: SSL, SSeg, FAtt · Citations: 9,280 · 14 Dec 2015

Visualizing and Understanding Convolutional Networks
Matthew D. Zeiler, Rob Fergus
Topics: FAtt, SSL · Citations: 15,825 · 12 Nov 2013