Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim (28 February 2017)
arXiv:1702.08608 [XAI, FaML]

Papers citing "Towards A Rigorous Science of Interpretable Machine Learning" (showing 50 of 403):
Are Metrics Enough? Guidelines for Communicating and Visualizing Predictive Models to Subject Matter Experts
  Ashley Suh, G. Appleby, Erik W. Anderson, Luca A. Finelli, Remco Chang, Dylan Cashman (11 May 2022)

Let's Go to the Alien Zoo: Introducing an Experimental Framework to Study Usability of Counterfactual Explanations for Machine Learning
  Ulrike Kuhl, André Artelt, Barbara Hammer (06 May 2022)

The Road to Explainability is Paved with Bias: Measuring the Fairness of Explanations
  Aparna Balagopalan, Haoran Zhang, Kimia Hamidieh, Thomas Hartvigsen, Frank Rudzicz, Marzyeh Ghassemi (06 May 2022)

Designing for Responsible Trust in AI Systems: A Communication Perspective
  Q. V. Liao, S. Sundar (29 Apr 2022)

Do Users Benefit From Interpretable Vision? A User Study, Baseline, And Dataset
  Leon Sixt, M. Schuessler, Oana-Iuliana Popescu, Philipp Weiß, Tim Landgraf [FAtt] (25 Apr 2022)

Towards Involving End-users in Interactive Human-in-the-loop AI Fairness
  Yuri Nakao, Simone Stumpf, Subeida Ahmed, A. Naseer, Lorenzo Strappelli (22 Apr 2022)

Features of Explainability: How users understand counterfactual and causal explanations for categorical and continuous features in XAI
  Greta Warren, Mark T. Keane, R. Byrne [CML] (21 Apr 2022)

Perception Visualization: Seeing Through the Eyes of a DNN
  Loris Giulivi, Mark J. Carman, Giacomo Boracchi (21 Apr 2022)

Calibrating Trust of Multi-Hop Question Answering Systems with Decompositional Probes
  Kaige Xie, Sarah Wiegreffe, Mark O. Riedl [ReLM] (16 Apr 2022)

ProtoTEx: Explaining Model Decisions with Prototype Tensors
  Anubrata Das, Chitrank Gupta, Venelin Kovatchev, Matthew Lease, J. Li (11 Apr 2022)

Measuring AI Systems Beyond Accuracy
  Violet Turri, R. Dzombak, Eric T. Heim, Nathan M. VanHoudnos, Jay Palat, Anusha Sinha (07 Apr 2022)

Interpretation of Black Box NLP Models: A Survey
  Shivani Choudhary, N. Chatterjee, S. K. Saha [FAtt] (31 Mar 2022)

Towards Explainable Evaluation Metrics for Natural Language Generation
  Christoph Leiter, Piyawat Lertvittayakumjorn, M. Fomicheva, Wei-Ye Zhao, Yang Gao, Steffen Eger [AAML, ELM] (21 Mar 2022)

Interpretability for Language Learners Using Example-Based Grammatical Error Correction
  Masahiro Kaneko, Sho Takase, Ayana Niwa, Naoaki Okazaki (14 Mar 2022)

Symbolic Learning to Optimize: Towards Interpretability and Scalability
  Wenqing Zheng, Tianlong Chen, Ting-Kuei Hu, Zhangyang Wang (13 Mar 2022)

Robustness and Usefulness in AI Explanation Methods
  Erick Galinkin [FAtt] (07 Mar 2022)

Interpretable part-whole hierarchies and conceptual-semantic relationships in neural networks
  Nicola Garau, N. Bisagno, Zeno Sambugaro, Nicola Conci (07 Mar 2022)

Interpretable Off-Policy Learning via Hyperbox Search
  D. Tschernutter, Tobias Hatt, Stefan Feuerriegel [OffRL, CML] (04 Mar 2022)

Sparse Bayesian Optimization
  Sulin Liu, Qing Feng, David Eriksson, Benjamin Letham, E. Bakshy (03 Mar 2022)

Reinforcement Learning in Practice: Opportunities and Challenges
  Yuxi Li [OffRL] (23 Feb 2022)

Evaluating Feature Attribution Methods in the Image Domain
  Arne Gevaert, Axel-Jan Rousseau, Thijs Becker, D. Valkenborg, T. D. Bie, Yvan Saeys [FAtt] (22 Feb 2022)

Interpreting Language Models with Contrastive Explanations
  Kayo Yin, Graham Neubig [MILM] (21 Feb 2022)

Don't Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis
  Thomas Fel, Mélanie Ducoffe, David Vigouroux, Rémi Cadène, Mikael Capelle, C. Nicodeme, Thomas Serre [AAML] (15 Feb 2022)

Trust in AI: Interpretability is not necessary or sufficient, while black-box interaction is necessary and sufficient
  Max W. Shen (10 Feb 2022)

Evaluation Methods and Measures for Causal Learning Algorithms
  Lu Cheng, Ruocheng Guo, Raha Moraffah, Paras Sheth, K. S. Candan, Huan Liu [CML, ELM] (07 Feb 2022)

Learning Interpretable, High-Performing Policies for Autonomous Driving
  Rohan R. Paleja, Yaru Niu, Andrew Silva, Chace Ritchie, Sugju Choi, Matthew C. Gombolay (04 Feb 2022)

The Disagreement Problem in Explainable Machine Learning: A Practitioner's Perspective
  Satyapriya Krishna, Tessa Han, Alex Gu, Steven Wu, S. Jabbari, Himabindu Lakkaraju (03 Feb 2022)

Hierarchical Shrinkage: improving the accuracy and interpretability of tree-based methods
  Abhineet Agarwal, Yan Shuo Tan, Omer Ronen, Chandan Singh, Bin-Xia Yu (02 Feb 2022)

Debiased-CAM to mitigate systematic error with faithful visual explanations of machine learning
  Wencan Zhang, Mariella Dimiccoli, Brian Y. Lim [FAtt] (30 Jan 2022)

Black-box Error Diagnosis in Deep Neural Networks for Computer Vision: a Survey of Tools
  Piero Fraternali, Federico Milani, Rocio Nahime Torres, Niccolò Zangrando [AAML] (17 Jan 2022)

An Accelerator for Rule Induction in Fuzzy Rough Theory
  Suyun Zhao, Zhi-Gang Dai, Xizhao Wang, Peng Ni, Hengheng Luo, Hong Chen, Cuiping Li (07 Jan 2022)

Explainability Is in the Mind of the Beholder: Establishing the Foundations of Explainable Artificial Intelligence
  Kacper Sokol, Peter A. Flach (29 Dec 2021)

AcME -- Accelerated Model-agnostic Explanations: Fast Whitening of the Machine-Learning Black Box
  David Dandolo, Chiara Masiero, Mattia Carletti, Davide Dalle Pezze, Gian Antonio Susto [FAtt, LRM] (23 Dec 2021)

Explain, Edit, and Understand: Rethinking User Study Design for Evaluating Model Explanations
  Siddhant Arora, Danish Pruthi, Norman M. Sadeh, William W. Cohen, Zachary Chase Lipton, Graham Neubig [FAtt] (17 Dec 2021)

UNIREX: A Unified Learning Framework for Language Model Rationale Extraction
  Aaron Chan, Maziar Sanjabi, Lambert Mathias, L Tan, Shaoliang Nie, Xiaochang Peng, Xiang Ren, Hamed Firooz (16 Dec 2021)

Interpretable Design of Reservoir Computing Networks using Realization Theory
  Wei Miao, Vignesh Narayanan, Jr-Shin Li (13 Dec 2021)

Evaluating saliency methods on artificial data with different background types
  Céline Budding, Fabian Eitel, K. Ritter, Stefan Haufe [XAI, FAtt, MedIm] (09 Dec 2021)

HIVE: Evaluating the Human Interpretability of Visual Explanations
  Sunnie S. Y. Kim, Nicole Meister, V. V. Ramaswamy, Ruth C. Fong, Olga Russakovsky (06 Dec 2021)

Explainable Deep Learning in Healthcare: A Methodological Survey from an Attribution View
  Di Jin, Elena Sergeeva, W. Weng, Geeticka Chauhan, Peter Szolovits [OOD] (05 Dec 2021)

Learning Optimal Predictive Checklists
  Haoran Zhang, Q. Morris, Berk Ustun, Marzyeh Ghassemi (02 Dec 2021)

On Two XAI Cultures: A Case Study of Non-technical Explanations in Deployed AI System
  Helen Jiang, Erwen Senge (02 Dec 2021)

Explainable AI (XAI): A Systematic Meta-Survey of Current Challenges and Future Opportunities
  Waddah Saeed, C. Omlin [XAI] (11 Nov 2021)

Counterfactual Explanations for Models of Code
  Jürgen Cito, Işıl Dillig, V. Murali, S. Chandra [AAML, LRM] (10 Nov 2021)

Self-Interpretable Model with Transformation Equivariant Interpretation
  Yipei Wang, Xiaoqian Wang (09 Nov 2021)

Transparency of Deep Neural Networks for Medical Image Analysis: A Review of Interpretability Methods
  Zohaib Salahuddin, Henry C. Woodruff, A. Chatterjee, Philippe Lambin (01 Nov 2021)

Evaluating the Faithfulness of Importance Measures in NLP by Recursively Masking Allegedly Important Tokens and Retraining
  Andreas Madsen, Nicholas Meade, Vaibhav Adlakha, Siva Reddy (15 Oct 2021)

Interpretable Neural Networks with Frank-Wolfe: Sparse Relevance Maps and Relevance Orderings
  Jan Macdonald, Mathieu Besançon, S. Pokutta (15 Oct 2021)

Can Explanations Be Useful for Calibrating Black Box Models?
  Xi Ye, Greg Durrett [FAtt] (14 Oct 2021)

Clustering-Based Interpretation of Deep ReLU Network
  Nicola Picchiotti, Marco Gori [FAtt] (13 Oct 2021)

A Survey on Legal Question Answering Systems
  J. Martinez-Gil [AILaw, ELM] (12 Oct 2021)