Human Factors in Model Interpretability: Industry Practices, Challenges, and Needs
arXiv:2004.11440 | 23 April 2020
Sungsoo Ray Hong, Jessica Hullman, E. Bertini
HAI
Papers citing "Human Factors in Model Interpretability: Industry Practices, Challenges, and Needs"
32 / 32 papers shown
 1. SPHERE: An Evaluation Card for Human-AI Systems
    Qianou Ma, Dora Zhao, Xinran Zhao, Chenglei Si, Chenyang Yang, Ryan Louie, Ehud Reiter, Diyi Yang, Tongshuang Wu
    ALM | 103 | 1 | 0 | 24 Mar 2025

 2. Rethinking Model Evaluation as Narrowing the Socio-Technical Gap
    Q. V. Liao, Ziang Xiao
    ALM, ELM | 96 | 32 | 0 | 01 Jun 2023

 3. Are Metrics Enough? Guidelines for Communicating and Visualizing Predictive Models to Subject Matter Experts
    Ashley Suh, G. Appleby, Erik W. Anderson, Luca A. Finelli, Remco Chang, Dylan Cashman
    95 | 8 | 0 | 11 May 2022

 4. The Disagreement Problem in Explainable Machine Learning: A Practitioner's Perspective
    Satyapriya Krishna, Tessa Han, Alex Gu, Steven Wu, S. Jabbari, Himabindu Lakkaraju
    246 | 195 | 0 | 03 Feb 2022

 5. Dissonance Between Human and Machine Understanding
    Zijian Zhang, Jaspreet Singh, U. Gadiraju, Avishek Anand
    109 | 74 | 0 | 18 Jan 2021

 6. How do Data Science Workers Collaborate? Roles, Workflows, and Tools
    Amy X. Zhang, Michael J. Muller, Dakuo Wang
    FedML, AI4CE | 67 | 259 | 0 | 18 Jan 2020

 7. Human-AI Collaboration in Data Science: Exploring Data Scientists' Perceptions of Automated AI
    Dakuo Wang, Justin D. Weisz, Michael J. Muller, Parikshit Ram, Werner Geyer, Casey Dugan, Y. Tausczik, Horst Samulowitz, Alexander G. Gray
    209 | 313 | 0 | 05 Sep 2019

 8. Quantifying Interpretability and Trust in Machine Learning Systems
    Philipp Schmidt, F. Biessmann
    50 | 113 | 0 | 20 Jan 2019

 9. Improving fairness in machine learning systems: What do industry practitioners need?
    Kenneth Holstein, Jennifer Wortman Vaughan, Hal Daumé, Miroslav Dudík, Hanna M. Wallach
    FaML, HAI | 245 | 760 | 0 | 13 Dec 2018

10. Automatically Explaining Machine Learning Prediction Results: A Demonstration on Type 2 Diabetes Risk Prediction
    G. Luo
    30 | 85 | 0 | 06 Dec 2018

11. The Effect of Heterogeneous Data for Alzheimer's Disease Detection from Speech
    Aparna Balagopalan, Jekaterina Novikova, Frank Rudzicz, Marzyeh Ghassemi
    49 | 21 | 0 | 29 Nov 2018

12. Towards Explainable Deep Learning for Credit Lending: A Case Study
    C. Modarres, Mark Ibrahim, Melissa Louie, John Paisley
    FaML | 349 | 20 | 0 | 15 Nov 2018

13. Manifold: A Model-Agnostic Framework for Interpretation and Diagnosis of Machine Learning Models
    Jiawei Zhang, Yang Wang, Piero Molino, Lezhi Li, D. Ebert
    FAtt | 48 | 204 | 0 | 01 Aug 2018

14. RuleMatrix: Visualizing and Understanding Classifiers with Rules
    Yao Ming, Huamin Qu, E. Bertini
    FAtt | 62 | 215 | 0 | 17 Jul 2018

15. Explaining Explanations: An Overview of Interpretability of Machine Learning
    Leilani H. Gilpin, David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael A. Specter, Lalana Kagal
    XAI | 93 | 1,857 | 0 | 31 May 2018

16. Human-in-the-Loop Interpretability Prior
    Isaac Lage, A. Ross, Been Kim, S. Gershman, Finale Doshi-Velez
    77 | 121 | 0 | 29 May 2018

17. Manipulating and Measuring Model Interpretability
    Forough Poursabzi-Sangdeh, D. Goldstein, Jake M. Hofman, Jennifer Wortman Vaughan, Hanna M. Wallach
    86 | 698 | 0 | 21 Feb 2018

18. A Survey Of Methods For Explaining Black Box Models
    Riccardo Guidotti, A. Monreale, Salvatore Ruggieri, Franco Turini, D. Pedreschi, F. Giannotti
    XAI | 124 | 3,961 | 0 | 06 Feb 2018

19. How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation
    Menaka Narayanan, Emily Chen, Jeffrey He, Been Kim, S. Gershman, Finale Doshi-Velez
    FAtt, XAI | 104 | 242 | 0 | 02 Feb 2018

20. Direct-Manipulation Visualization of Deep Networks
    D. Smilkov, Shan Carter, D. Sculley, F. Viégas, Martin Wattenberg
    FAtt, AI4CE | 53 | 140 | 0 | 12 Aug 2017

21. Explanation in Artificial Intelligence: Insights from the Social Sciences
    Tim Miller
    XAI | 242 | 4,265 | 0 | 22 Jun 2017

22. A Unified Approach to Interpreting Model Predictions
    Scott M. Lundberg, Su-In Lee
    FAtt | 1.1K | 21,906 | 0 | 22 May 2017

23. ActiVis: Visual Exploration of Industry-Scale Deep Neural Network Models
    Minsuk Kahng, Pierre Yves Andrews, Aditya Kalro, Duen Horng Chau
    HAI | 69 | 324 | 0 | 06 Apr 2017

24. Towards A Rigorous Science of Interpretable Machine Learning
    Finale Doshi-Velez, Been Kim
    XAI, FaML | 399 | 3,798 | 0 | 28 Feb 2017

25. Improving Human-Machine Cooperative Visual Search With Soft Highlighting
    R. T. Kneusel, Michael C. Mozer
    46 | 26 | 0 | 24 Dec 2016

26. Detecting Dependencies in Sparse, Multivariate Databases Using Probabilistic Programming and Non-parametric Bayes
    Feras A. Saad, Vikash K. Mansinghka
    30 | 14 | 0 | 05 Nov 2016

27. The Mythos of Model Interpretability
    Zachary Chase Lipton
    FaML | 180 | 3,701 | 0 | 10 Jun 2016

28. "Why Should I Trust You?": Explaining the Predictions of Any Classifier
    Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
    FAtt, FaML | 1.2K | 16,976 | 0 | 16 Feb 2016

29. BayesDB: A probabilistic programming system for querying the probable implications of data
    Vikash K. Mansinghka, R. Tibbetts, Jay Baxter, Pat Shafto, Baxter S. Eaves
    46 | 38 | 0 | 15 Dec 2015

30. Interpretable classifiers using rules and Bayesian analysis: Building a better stroke prediction model
    Benjamin Letham, Cynthia Rudin, Tyler H. McCormick, D. Madigan
    FAtt | 65 | 743 | 0 | 05 Nov 2015

31. Monotonic Calibrated Interpolated Look-Up Tables
    Maya R. Gupta, Andrew Cotter, Jan Pfeifer, Konstantin Voevodski, K. Canini, Alexander Mangylov, Wojtek Moczydlowski, A. V. Esbroeck
    189 | 128 | 0 | 23 May 2015

32. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
    Karen Simonyan, Andrea Vedaldi, Andrew Zisserman
    FAtt | 312 | 7,295 | 0 | 20 Dec 2013