Expanding Explainability: Towards Social Transparency in AI systems

12 January 2021
Upol Ehsan, Q. V. Liao, Michael J. Muller, Mark O. Riedl, Justin D. Weisz
ArXiv (abs) · PDF · HTML

Papers citing "Expanding Explainability: Towards Social Transparency in AI systems"

31 of 31 papers shown:

1. Perils of Label Indeterminacy: A Case Study on Prediction of Neurological Recovery After Cardiac Arrest
   Jakob Schoeffer, Maria De-Arteaga, Jonathan Elmer · 05 Apr 2025
2. Assistance or Disruption? Exploring and Evaluating the Design and Trade-offs of Proactive AI Programming Support
   Kevin Pu, Daniel Lazaro, Ian Arawjo, Haijun Xia, Ziang Xiao, Tovi Grossman, Yan Chen · 25 Feb 2025
3. Revisiting Rogers' Paradox in the Context of Human-AI Interaction
   Katherine M. Collins, Umang Bhatt, Ilia Sucholutsky · 16 Jan 2025
4. Bridging Today and the Future of Humanity: AI Safety in 2024 and Beyond
   Shanshan Han · 09 Oct 2024
5. ShapG: new feature importance method based on the Shapley value
   Chi Zhao, Jing Liu, Elena Parilina · FAtt · 29 Jun 2024
6. Path To Gain Functional Transparency In Artificial Intelligence With Meaningful Explainability
   Md. Tanzib Hosain, Md. Mehedi Hasan Anik, Sadman Rafi, Rana Tabassum, Khaleque Insia, Md. Mehrab Siddiky · 13 Oct 2023
7. Why is AI not a Panacea for Data Workers? An Interview Study on Human-AI Collaboration in Data Storytelling
   Haotian Li, Yun Wang, Q. V. Liao, Huamin Qu · 17 Apr 2023
8. Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence
   Shakir Mohamed, Marie-Therese Png, William S. Isaac · 08 Jul 2020
9. Human Factors in Model Interpretability: Industry Practices, Challenges, and Needs
   Sungsoo Ray Hong, Jessica Hullman, E. Bertini · HAI · 23 Apr 2020
10. Human-Centered Artificial Intelligence: Reliable, Safe & Trustworthy
    B. Shneiderman · 10 Feb 2020
11. Human-centered Explainable AI: Towards a Reflective Sociotechnical Approach
    Upol Ehsan, Mark O. Riedl · 04 Feb 2020
12. Evaluating Saliency Map Explanations for Convolutional Neural Networks: A User Study
    Ahmed Alqaraawi, M. Schuessler, Philipp Weiß, Enrico Costanza, N. Bianchi-Berthouze · AAML, FAtt, XAI · 03 Feb 2020
13. Proxy Tasks and Subjective Measures Can Be Misleading in Evaluating Explainable AI Systems
    Zana Buçinca, Phoebe Lin, Krzysztof Z. Gajos, Elena L. Glassman · ELM · 22 Jan 2020
14. "Why is 'Chicago' deceptive?" Towards Building Model-Driven Tutorials for Humans
    Vivian Lai, Han Liu, Chenhao Tan · 14 Jan 2020
15. Questioning the AI: Informing Design Practices for Explainable AI User Experiences
    Q. V. Liao, D. Gruen, Sarah Miller · 08 Jan 2020
16. Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making
    Yunfeng Zhang, Q. V. Liao, Rachel K. E. Bellamy · 07 Jan 2020
17. Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
    Alejandro Barredo Arrieta, Natalia Díaz Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, ..., S. Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera · XAI · 22 Oct 2019
18. What does it mean to solve the problem of discrimination in hiring? Social, technical and legal perspectives from the UK on automated hiring systems
    Javier Sánchez-Monedero, L. Dencik, L. Edwards · 28 Sep 2019
19. One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques
    Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, ..., Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis L. Wei, Yunfeng Zhang · XAI · 06 Sep 2019
20. explAIner: A Visual Analytics Framework for Interactive and Explainable Machine Learning
    Thilo Spinner, U. Schlegel, H. Schäfer, Mennatallah El-Assady · HAI · 29 Jul 2019
21. Unremarkable AI: Fitting Intelligent Decision Support into Critical, Clinical Decision-Making Processes
    Qian Yang, Aaron Steinfeld, John Zimmerman · 21 Apr 2019
22. Automated Rationale Generation: A Technique for Explainable AI and its Effects on Human Perceptions
    Upol Ehsan, Pradyumna Tambwekar, Larry Chan, Brent Harrison, Mark O. Riedl · 11 Jan 2019
23. TED: Teaching AI to Explain its Decisions
    Michael Hind, Dennis L. Wei, Murray Campbell, Noel Codella, Amit Dhurandhar, Aleksandra Mojsilović, Karthikeyan N. Ramamurthy, Kush R. Varshney · 12 Nov 2018
24. Explaining Explanations in AI
    Brent Mittelstadt, Chris Russell, Sandra Wachter · XAI · 04 Nov 2018
25. Explaining Explanations: An Overview of Interpretability of Machine Learning
    Leilani H. Gilpin, David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael A. Specter, Lalana Kagal · XAI · 31 May 2018
26. Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges
    Gabrielle Ras, Marcel van Gerven, W. Haselager · XAI · 20 Mar 2018
27. The Challenge of Crafting Intelligible Intelligence
    Daniel S. Weld, Gagan Bansal · 09 Mar 2018
28. Manipulating and Measuring Model Interpretability
    Forough Poursabzi-Sangdeh, D. Goldstein, Jake M. Hofman, Jennifer Wortman Vaughan, Hanna M. Wallach · 21 Feb 2018
29. Explanation in Artificial Intelligence: Insights from the Social Sciences
    Tim Miller · XAI · 22 Jun 2017
30. Towards A Rigorous Science of Interpretable Machine Learning
    Finale Doshi-Velez, Been Kim · XAI, FaML · 28 Feb 2017
31. The Mythos of Model Interpretability
    Zachary Chase Lipton · FaML · 10 Jun 2016