Towards A Rigorous Science of Interpretable Machine Learning (arXiv:1702.08608)
28 February 2017
Finale Doshi-Velez, Been Kim
Tags: XAI, FaML
Papers citing "Towards A Rigorous Science of Interpretable Machine Learning" (32 of 482 shown)
| Title | Authors | Tags | Metrics | Date |
|---|---|---|---|---|
| Model Agnostic Supervised Local Explanations | Gregory Plumb, Denali Molitor, Ameet Talwalkar | FAtt, LRM, MILM | 14 / 196 / 0 | 09 Jul 2018 |
| xGEMs: Generating Examplars to Explain Black-Box Models | Shalmali Joshi, Oluwasanmi Koyejo, Been Kim, Joydeep Ghosh | MLAU | 25 / 40 / 0 | 22 Jun 2018 |
| Learning Qualitatively Diverse and Interpretable Rules for Classification | A. Ross, Weiwei Pan, Finale Doshi-Velez | | 16 / 13 / 0 | 22 Jun 2018 |
| Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems | Richard J. Tomsett, Dave Braines, Daniel Harborne, Alun D. Preece, Supriyo Chakraborty | FaML | 26 / 164 / 0 | 20 Jun 2018 |
| Towards Robust Interpretability with Self-Explaining Neural Networks | David Alvarez-Melis, Tommi Jaakkola | MILM, XAI | 29 / 933 / 0 | 20 Jun 2018 |
| Contrastive Explanations with Local Foil Trees | J. V. D. Waa, M. Robeer, J. Diggelen, Matthieu J. S. Brinkhuis, Mark Antonius Neerincx | FAtt | 19 / 82 / 0 | 19 Jun 2018 |
| Instance-Level Explanations for Fraud Detection: A Case Study | Dennis Collaris, L. M. Vink, J. V. Wijk | | 29 / 31 / 0 | 19 Jun 2018 |
| Learning Kolmogorov Models for Binary Random Variables | H. Ghauch, Mikael Skoglund, H. S. Ghadikolaei, Carlo Fischione, Ali H. Sayed | | 16 / 7 / 0 | 06 Jun 2018 |
| Performance Metric Elicitation from Pairwise Classifier Comparisons | G. Hiranandani, Shant Boodaghians, R. Mehta, Oluwasanmi Koyejo | | 19 / 14 / 0 | 05 Jun 2018 |
| Explaining Explanations: An Overview of Interpretability of Machine Learning | Leilani H. Gilpin, David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael A. Specter, Lalana Kagal | XAI | 40 / 1,840 / 0 | 31 May 2018 |
| Human-in-the-Loop Interpretability Prior | Isaac Lage, A. Ross, Been Kim, S. Gershman, Finale Doshi-Velez | | 32 / 120 / 0 | 29 May 2018 |
| Local Rule-Based Explanations of Black Box Decision Systems | Riccardo Guidotti, A. Monreale, Salvatore Ruggieri, D. Pedreschi, Franco Turini, F. Giannotti | | 31 / 435 / 0 | 28 May 2018 |
| Disentangling Controllable and Uncontrollable Factors of Variation by Interacting with the World | Yoshihide Sawada | DRL | 21 / 10 / 0 | 19 Apr 2018 |
| Entanglement-guided architectures of machine learning by quantum tensor network | Yuhan Liu, Xiao Zhang, M. Lewenstein, Shi-Ju Ran | | 23 / 32 / 0 | 24 Mar 2018 |
| Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges | Gabrielle Ras, Marcel van Gerven, W. Haselager | XAI | 17 / 217 / 0 | 20 Mar 2018 |
| Constant-Time Predictive Distributions for Gaussian Processes | Geoff Pleiss, Jacob R. Gardner, Kilian Q. Weinberger, A. Wilson | | 25 / 94 / 0 | 16 Mar 2018 |
| Structural Agnostic Modeling: Adversarial Learning of Causal Graphs | Diviyan Kalainathan, Olivier Goudet, Isabelle M Guyon, David Lopez-Paz, Michèle Sebag | CML | 24 / 93 / 0 | 13 Mar 2018 |
| Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning | Nicolas Papernot, Patrick McDaniel | OOD, AAML | 8 / 502 / 0 | 13 Mar 2018 |
| Adversarial Malware Binaries: Evading Deep Learning for Malware Detection in Executables | Bojan Kolosnjaji, Ambra Demontis, Battista Biggio, Davide Maiorca, Giorgio Giacinto, Claudia Eckert, Fabio Roli | AAML | 19 / 316 / 0 | 12 Mar 2018 |
| Explaining Black-box Android Malware Detection | Marco Melis, Davide Maiorca, Battista Biggio, Giorgio Giacinto, Fabio Roli | AAML, FAtt | 9 / 43 / 0 | 09 Mar 2018 |
| The Challenge of Crafting Intelligible Intelligence | Daniel S. Weld, Gagan Bansal | | 26 / 241 / 0 | 09 Mar 2018 |
| Teaching Categories to Human Learners with Visual Explanations | Oisin Mac Aodha, Shihan Su, Yuxin Chen, Pietro Perona, Yisong Yue | | 21 / 70 / 0 | 20 Feb 2018 |
| How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation | Menaka Narayanan, Emily Chen, Jeffrey He, Been Kim, S. Gershman, Finale Doshi-Velez | FAtt, XAI | 33 / 241 / 0 | 02 Feb 2018 |
| Inverse Classification for Comparison-based Interpretability in Machine Learning | Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, X. Renard, Marcin Detyniecki | | 18 / 100 / 0 | 22 Dec 2017 |
| Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV) | Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, F. Viégas, Rory Sayres | FAtt | 77 / 1,791 / 0 | 30 Nov 2017 |
| MARGIN: Uncovering Deep Neural Networks using Graph Signal Analysis | Rushil Anirudh, Jayaraman J. Thiagarajan, R. Sridhar, T. Bremer | FAtt, AAML | 23 / 12 / 0 | 15 Nov 2017 |
| A Formal Framework to Characterize Interpretability of Procedures | Amit Dhurandhar, Vijay Iyengar, Ronny Luss, Karthikeyan Shanmugam | | 15 / 19 / 0 | 12 Jul 2017 |
| SmoothGrad: removing noise by adding noise | D. Smilkov, Nikhil Thorat, Been Kim, F. Viégas, Martin Wattenberg | FAtt, ODL | 40 / 2,204 / 0 | 12 Jun 2017 |
| Contextual Explanation Networks | Maruan Al-Shedivat, Kumar Avinava Dubey, Eric P. Xing | CML | 35 / 82 / 0 | 29 May 2017 |
| Interpreting Blackbox Models via Model Extraction | Osbert Bastani, Carolyn Kim, Hamsa Bastani | FAtt | 24 / 170 / 0 | 23 May 2017 |
| Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations | A. Ross, M. C. Hughes, Finale Doshi-Velez | FAtt | 41 / 582 / 0 | 10 Mar 2017 |
| Deep Reinforcement Learning: An Overview | Yuxi Li | OffRL, VLM | 104 / 1,502 / 0 | 25 Jan 2017 |