ResearchTrend.AI
Interpretable Machine Learning: Moving From Mythos to Diagnostics
arXiv:2103.06254 · 10 March 2021
Valerie Chen, Jeffrey Li, Joon Sik Kim, Gregory Plumb, Ameet Talwalkar

Papers citing "Interpretable Machine Learning: Moving From Mythos to Diagnostics"

13 / 13 papers shown
  1. Beyond Model Interpretability: Socio-Structural Explanations in Machine Learning
     Andrew Smart, Atoosa Kasirzadeh (05 Sep 2024)
  2. Biathlon: Harnessing Model Resilience for Accelerating ML Inference Pipelines
     Chaokun Chang, Eric Lo, Chunxiao Ye (18 May 2024)
  3. What Does Evaluation of Explainable Artificial Intelligence Actually Tell Us? A Case for Compositional and Contextual Validation of XAI Building Blocks
     Kacper Sokol, Julia E. Vogt (19 Mar 2024)
  4. Guidelines for Integrating Value Sensitive Design in Responsible AI Toolkits
     Malak Sadek, Marios Constantinides, Daniele Quercia, C. Mougenot (29 Feb 2024)
  5. Designerly Understanding: Information Needs for Model Transparency to Support Design Ideation for AI-Powered User Experience
     Q. V. Liao, Hariharan Subramonyam, Jennifer Wang, Jennifer Wortman Vaughan (21 Feb 2023)
  6. Selective Explanations: Leveraging Human Input to Align Explainable AI
     Vivian Lai, Yiming Zhang, Chacha Chen, Q. V. Liao, Chenhao Tan (23 Jan 2023)
  7. Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations
     Valerie Chen, Q. V. Liao, Jennifer Wortman Vaughan, Gagan Bansal (18 Jan 2023)
  8. Perspectives on Incorporating Expert Feedback into Model Updates
     Valerie Chen, Umang Bhatt, Hoda Heidari, Adrian Weller, Ameet Talwalkar (13 May 2022)
  9. The Road to Explainability is Paved with Bias: Measuring the Fairness of Explanations
     Aparna Balagopalan, Haoran Zhang, Kimia Hamidieh, Thomas Hartvigsen, Frank Rudzicz, Marzyeh Ghassemi (06 May 2022)
  10. DIME: Fine-grained Interpretations of Multimodal Models via Disentangled Local Explanations
      Yiwei Lyu, Paul Pu Liang, Zihao Deng, Ruslan Salakhutdinov, Louis-Philippe Morency (03 Mar 2022)
  11. Explainable Machine Learning for Public Policy: Use Cases, Gaps, and Research Directions
      Kasun Amarasinghe, Kit Rodolfa, Hemank Lamba, Rayid Ghani (27 Oct 2020)
  12. Issues with post-hoc counterfactual explanations: a discussion
      Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Marcin Detyniecki (11 Jun 2019)
  13. Towards A Rigorous Science of Interpretable Machine Learning
      Finale Doshi-Velez, Been Kim (28 Feb 2017)