ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Machine Learning Explainability for External Stakeholders

10 July 2020
Umang Bhatt
Mckane Andrus
Adrian Weller
Alice Xiang
FaML, SILM

Papers citing "Machine Learning Explainability for External Stakeholders"

30 / 30 papers shown
Getting a CLUE: A Method for Explaining Uncertainty Estimates
Javier Antorán
Umang Bhatt
T. Adel
Adrian Weller
José Miguel Hernández-Lobato
UQCV, BDL
94
116
0
11 Jun 2020
Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior?
Peter Hase
Joey Tianyi Zhou
FAtt
79
304
0
04 May 2020
Evaluating and Aggregating Feature-based Model Explanations
Umang Bhatt
Adrian Weller
J. M. F. Moura
XAI
97
226
0
01 May 2020
Proxy Tasks and Subjective Measures Can Be Misleading in Evaluating Explainable AI Systems
Zana Buçinca
Phoebe Lin
Krzysztof Z. Gajos
Elena L. Glassman
ELM
84
288
0
22 Jan 2020
Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making
Yunfeng Zhang
Q. V. Liao
Rachel K. E. Bellamy
94
682
0
07 Jan 2020
ABOUT ML: Annotation and Benchmarking on Understanding and Transparency of Machine Learning Lifecycles
Inioluwa Deborah Raji
Jingyi Yang
90
38
0
12 Dec 2019
Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods
Dylan Slack
Sophie Hilgard
Emily Jia
Sameer Singh
Himabindu Lakkaraju
FAtt, AAML, MLAU
81
822
0
06 Nov 2019
Addressing Failure Prediction by Learning Model Confidence
Charles Corbière
Nicolas Thome
Avner Bar-Hen
Matthieu Cord
P. Pérez
122
291
0
01 Oct 2019
One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques
Vijay Arya
Rachel K. E. Bellamy
Pin-Yu Chen
Amit Dhurandhar
Michael Hind
...
Karthikeyan Shanmugam
Moninder Singh
Kush R. Varshney
Dennis L. Wei
Yunfeng Zhang
XAI
74
393
0
06 Sep 2019
Why Authors Don't Visualize Uncertainty
Jessica Hullman
61
126
0
05 Aug 2019
Can You Trust Your Model's Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift
Yaniv Ovadia
Emily Fertig
Jie Jessie Ren
Zachary Nado
D. Sculley
Sebastian Nowozin
Joshua V. Dillon
Balaji Lakshminarayanan
Jasper Snoek
UQCV
192
1,706
0
06 Jun 2019
Data Science and Digital Systems: The 3Ds of Machine Learning Systems Design
Neil D. Lawrence
AI4CE, PINN
55
6
0
26 Mar 2019
On Network Science and Mutual Information for Explaining Deep Neural Networks
Brian Davis
Umang Bhatt
Kartikeya Bhardwaj
R. Marculescu
J. M. F. Moura
FedML, SSL, FAtt
51
10
0
20 Jan 2019
Representer Point Selection for Explaining Deep Neural Networks
Chih-Kuan Yeh
Joon Sik Kim
Ian En-Hsu Yen
Pradeep Ravikumar
TDI
94
254
0
23 Nov 2018
Interpreting Black Box Predictions using Fisher Kernels
Rajiv Khanna
Been Kim
Joydeep Ghosh
Oluwasanmi Koyejo
FAtt
85
104
0
23 Oct 2018
Model Cards for Model Reporting
Margaret Mitchell
Simone Wu
Andrew Zaldivar
Parker Barnes
Lucy Vasserman
Ben Hutchinson
Elena Spitzer
Inioluwa Deborah Raji
Timnit Gebru
146
1,910
0
05 Oct 2018
Actionable Recourse in Linear Classification
Berk Ustun
Alexander Spangher
Yang Liu
FaML
129
551
0
18 Sep 2018
The Social Cost of Strategic Classification
S. Milli
John Miller
Anca Dragan
Moritz Hardt
51
184
0
25 Aug 2018
Accurate Uncertainties for Deep Learning Using Calibrated Regression
Volodymyr Kuleshov
Nathan Fenner
Stefano Ermon
BDL, UQCV
209
636
0
01 Jul 2018
Datasheets for Datasets
Timnit Gebru
Jamie Morgenstern
Briana Vecchione
Jennifer Wortman Vaughan
Hanna M. Wallach
Hal Daumé
Kate Crawford
296
2,201
0
23 Mar 2018
Manipulating and Measuring Model Interpretability
Forough Poursabzi-Sangdeh
D. Goldstein
Jake M. Hofman
Jennifer Wortman Vaughan
Hanna M. Wallach
110
701
0
21 Feb 2018
Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives
Amit Dhurandhar
Pin-Yu Chen
Ronny Luss
Chun-Chen Tu
Pai-Shun Ting
Karthikeyan Shanmugam
Payel Das
FAtt
129
592
0
21 Feb 2018
Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR
Sandra Wachter
Brent Mittelstadt
Chris Russell
MLAU
140
2,371
0
01 Nov 2017
Explanation in Artificial Intelligence: Insights from the Social Sciences
Tim Miller
XAI
261
4,287
0
22 Jun 2017
On Calibration of Modern Neural Networks
Chuan Guo
Geoff Pleiss
Yu Sun
Kilian Q. Weinberger
UQCV
299
5,877
0
14 Jun 2017
A Unified Approach to Interpreting Model Predictions
Scott M. Lundberg
Su-In Lee
FAtt
1.1K
22,135
0
22 May 2017
Understanding Black-box Predictions via Influence Functions
Pang Wei Koh
Percy Liang
TDI
229
2,910
0
14 Mar 2017
Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez
Been Kim
XAI, FaML
422
3,824
0
28 Feb 2017
The Mythos of Model Interpretability
Zachary Chase Lipton
FaML
183
3,716
0
10 Jun 2016
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
FAtt, FaML
1.2K
17,092
0
16 Feb 2016