arXiv:1802.07810
Manipulating and Measuring Model Interpretability
Forough Poursabzi-Sangdeh, D. Goldstein, Jake M. Hofman, Jennifer Wortman Vaughan, Hanna M. Wallach
21 February 2018
Papers citing "Manipulating and Measuring Model Interpretability" (50 of 114 shown)
- Trust in AI: Interpretability is not necessary or sufficient, while black-box interaction is necessary and sufficient
  Max W. Shen. 10 Feb 2022.
- The Disagreement Problem in Explainable Machine Learning: A Practitioner's Perspective
  Satyapriya Krishna, Tessa Han, Alex Gu, Steven Wu, S. Jabbari, Himabindu Lakkaraju. 03 Feb 2022.
- Framework for Evaluating Faithfulness of Local Explanations
  S. Dasgupta, Nave Frost, Michal Moshkovitz. 01 Feb 2022. [FAtt]
- Debiased-CAM to mitigate systematic error with faithful visual explanations of machine learning
  Wencan Zhang, Mariella Dimiccoli, Brian Y. Lim. 30 Jan 2022. [FAtt]
- Explainability in Music Recommender Systems
  Darius Afchar, Alessandro B. Melchiorre, Markus Schedl, Romain Hennequin, Elena V. Epure, Manuel Moussallam. 25 Jan 2022.
- Towards Relatable Explainable AI with the Perceptual Process
  Wencan Zhang, Brian Y. Lim. 28 Dec 2021. [AAML, XAI]
- Towards a Science of Human-AI Decision Making: A Survey of Empirical Studies
  Vivian Lai, Chacha Chen, Q. V. Liao, Alison Smith-Renner, Chenhao Tan. 21 Dec 2021.
- Explain, Edit, and Understand: Rethinking User Study Design for Evaluating Model Explanations
  Siddhant Arora, Danish Pruthi, Norman M. Sadeh, William W. Cohen, Zachary Chase Lipton, Graham Neubig. 17 Dec 2021. [FAtt]
- HIVE: Evaluating the Human Interpretability of Visual Explanations
  Sunnie S. Y. Kim, Nicole Meister, V. V. Ramaswamy, Ruth C. Fong, Olga Russakovsky. 06 Dec 2021.
- Learning Optimal Predictive Checklists
  Haoran Zhang, Q. Morris, Berk Ustun, Marzyeh Ghassemi. 02 Dec 2021.
- Will We Trust What We Don't Understand? Impact of Model Interpretability and Outcome Feedback on Trust in AI
  Daehwan Ahn, Abdullah Almaatouq, Monisha Gulabani, K. Hosanagar. 16 Nov 2021.
- Trustworthy AI: From Principles to Practices
  Bo-wen Li, Peng Qi, Bo Liu, Shuai Di, Jingen Liu, Jiquan Pei, Jinfeng Yi, Bowen Zhou. 04 Oct 2021.
- Some Critical and Ethical Perspectives on the Empirical Turn of AI Interpretability
  Jean-Marie John-Mathews. 20 Sep 2021.
- An Objective Metric for Explainable AI: How and Why to Estimate the Degree of Explainability
  Francesco Sovrano, F. Vitali. 11 Sep 2021.
- The Flaws of Policies Requiring Human Oversight of Government Algorithms
  Ben Green. 10 Sep 2021.
- The Impact of Algorithmic Risk Assessments on Human Predictions and its Analysis via Crowdsourcing Studies
  Riccardo Fogliato, Alexandra Chouldechova, Zachary Chase Lipton. 03 Sep 2021.
- Contemporary Symbolic Regression Methods and their Relative Performance
  William La Cava, Patryk Orzechowski, Bogdan Burlacu, Fabrício Olivetti de França, M. Virgolin, Ying Jin, M. Kommenda, J. Moore. 29 Jul 2021.
- Productivity, Portability, Performance: Data-Centric Python
  Yiheng Wang, Yao Zhang, Yanzhang Wang, Yan Wan, Jiao Wang, Zhongyuan Wu, Yuhao Yang, Bowen She. 01 Jul 2021.
- How Well do Feature Visualizations Support Causal Understanding of CNN Activations?
  Roland S. Zimmermann, Judy Borowski, Robert Geirhos, Matthias Bethge, Thomas S. A. Wallis, Wieland Brendel. 23 Jun 2021. [FAtt]
- On the Lack of Robust Interpretability of Neural Text Classifiers
  Muhammad Bilal Zafar, Michele Donini, Dylan Slack, Cédric Archambeau, Sanjiv Ranjan Das, K. Kenthapadi. 08 Jun 2021. [AAML]
- Explanation-Based Human Debugging of NLP Models: A Survey
  Piyawat Lertvittayakumjorn, Francesca Toni. 30 Apr 2021. [LRM]
- From Human Explanation to Model Interpretability: A Framework Based on Weight of Evidence
  David Alvarez-Melis, Harmanpreet Kaur, Hal Daumé, Hanna M. Wallach, Jennifer Wortman Vaughan. 27 Apr 2021. [FAtt]
- Extractive and Abstractive Explanations for Fact-Checking and Evaluation of News
  Ashkan Kazemi, Zehua Li, Verónica Pérez-Rosas, Rada Mihalcea. 27 Apr 2021.
- Model Learning with Personalized Interpretability Estimation (ML-PIE)
  M. Virgolin, A. D. Lorenzo, Francesca Randone, Eric Medvet, M. Wahde. 13 Apr 2021.
- Designing for human-AI complementarity in K-12 education
  Kenneth Holstein, V. Aleven. 02 Apr 2021. [HAI]
- Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges
  Cynthia Rudin, Chaofan Chen, Zhi Chen, Haiyang Huang, Lesia Semenova, Chudi Zhong. 20 Mar 2021. [FaML, AI4CE, LRM]
- Explanations in Autonomous Driving: A Survey
  Daniel Omeiza, Helena Webb, Marina Jirotka, Lars Kunze. 09 Mar 2021.
- If Only We Had Better Counterfactual Explanations: Five Key Deficits to Rectify in the Evaluation of Counterfactual XAI Techniques
  Mark T. Keane, Eoin M. Kenny, Eoin Delaney, Barry Smyth. 26 Feb 2021. [CML]
- Do Input Gradients Highlight Discriminative Features?
  Harshay Shah, Prateek Jain, Praneeth Netrapalli. 25 Feb 2021. [AAML, FAtt]
- Intuitively Assessing ML Model Reliability through Example-Based Explanations and Editing Model Inputs
  Harini Suresh, Kathleen M. Lewis, John Guttag, Arvind Satyanarayan. 17 Feb 2021. [FAtt]
- Beyond Expertise and Roles: A Framework to Characterize the Stakeholders of Interpretable Machine Learning and their Needs
  Harini Suresh, Steven R. Gomez, K. Nam, Arvind Satyanarayan. 24 Jan 2021.
- Expanding Explainability: Towards Social Transparency in AI systems
  Upol Ehsan, Q. V. Liao, Michael J. Muller, Mark O. Riedl, Justin D. Weisz. 12 Jan 2021.
- Unbox the Blackbox: Predict and Interpret YouTube Viewership Using Deep Learning
  Jiaheng Xie, Xinyu Liu. 21 Dec 2020. [HAI]
- Learning how to approve updates to machine learning algorithms in non-stationary settings
  Jean Feng. 14 Dec 2020.
- Debiased-CAM to mitigate image perturbations with faithful visual explanations of machine learning
  Wencan Zhang, Mariella Dimiccoli, Brian Y. Lim. 10 Dec 2020. [FAtt]
- Biased TextRank: Unsupervised Graph-Based Content Extraction
  Ashkan Kazemi, Verónica Pérez-Rosas, Rada Mihalcea. 02 Nov 2020.
- Now You See Me (CME): Concept-based Model Extraction
  Dmitry Kazhdan, B. Dimanov, M. Jamnik, Pietro Lio, Adrian Weller. 25 Oct 2020.
- Interpretable Machine Learning -- A Brief History, State-of-the-Art and Challenges
  Christoph Molnar, Giuseppe Casalicchio, B. Bischl. 19 Oct 2020. [AI4TS, AI4CE]
- Deciding Fast and Slow: The Role of Cognitive Biases in AI-assisted Decision-making
  Charvi Rastogi, Yunfeng Zhang, Dennis L. Wei, Kush R. Varshney, Amit Dhurandhar, Richard J. Tomsett. 15 Oct 2020. [HAI]
- Reliable Post hoc Explanations: Modeling Uncertainty in Explainability
  Dylan Slack, Sophie Hilgard, Sameer Singh, Himabindu Lakkaraju. 11 Aug 2020. [FAtt]
- Machine Learning Explanations to Prevent Overtrust in Fake News Detection
  Sina Mohseni, Fan Yang, Shiva K. Pentyala, Mengnan Du, Yi Liu, Nic Lupfer, Xia Hu, Shuiwang Ji, Eric D. Ragan. 24 Jul 2020.
- Sequential Explanations with Mental Model-Based Policies
  A. Yeung, Shalmali Joshi, Joseph Jay Williams, Frank Rudzicz. 17 Jul 2020. [FAtt, LRM]
- Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance
  Gagan Bansal, Tongshuang Wu, Joyce Zhou, Raymond Fok, Besmira Nushi, Ece Kamar, Marco Tulio Ribeiro, Daniel S. Weld. 26 Jun 2020.
- Does Explainable Artificial Intelligence Improve Human Decision-Making?
  Y. Alufaisan, L. Marusich, J. Bakdash, Yan Zhou, Murat Kantarcioglu. 19 Jun 2020. [XAI]
- Misplaced Trust: Measuring the Interference of Machine Learning in Human Decision-Making
  Harini Suresh, Natalie Lao, Ilaria Liccardi. 22 May 2020.
- Evaluating and Aggregating Feature-based Model Explanations
  Umang Bhatt, Adrian Weller, J. M. F. Moura. 01 May 2020. [XAI]
- The Grammar of Interactive Explanatory Model Analysis
  Hubert Baniecki, Dariusz Parzych, P. Biecek. 01 May 2020.
- Human Factors in Model Interpretability: Industry Practices, Challenges, and Needs
  Sungsoo Ray Hong, Jessica Hullman, E. Bertini. 23 Apr 2020. [HAI]
- Learning a Formula of Interpretability to Learn Interpretable Formulas
  M. Virgolin, A. D. Lorenzo, Eric Medvet, Francesca Randone. 23 Apr 2020.
- CrossCheck: Rapid, Reproducible, and Interpretable Model Evaluation
  Dustin L. Arendt, Zhuanyi Huang, Prasha Shrestha, Ellyn Ayton, M. Glenski, Svitlana Volkova. 16 Apr 2020.