arXiv:1802.01933 (v3, latest)
A Survey Of Methods For Explaining Black Box Models
6 February 2018
Riccardo Guidotti, A. Monreale, Salvatore Ruggieri, Franco Turini, D. Pedreschi, F. Giannotti
Tags: XAI
Papers citing "A Survey Of Methods For Explaining Black Box Models"
Showing 50 of 1,104 citing papers.
- How Case Based Reasoning Explained Neural Networks: An XAI Survey of Post-Hoc Explanation-by-Example in ANN-CBR Twins
  Mark T. Keane, Eoin M. Kenny. 17 May 2019. (125 / 81 / 0)
- An Information Theoretic Interpretation to Deep Neural Networks
  Shao-Lun Huang, Xiangxiang Xu, Lizhong Zheng, G. Wornell. FAtt. 16 May 2019. (90 / 44 / 0)
- From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices
  Jessica Morley, Luciano Floridi, Libby Kinsey, Anat Elhalal. 15 May 2019. (83 / 57 / 0)
- What Clinicians Want: Contextualizing Explainable Machine Learning for Clinical End Use
  S. Tonekaboni, Shalmali Joshi, M. Mccradden, Anna Goldenberg. 13 May 2019. (97 / 401 / 0)
- Property Inference for Deep Neural Networks
  D. Gopinath, Hayes Converse, C. Păsăreanu, Ankur Taly. 29 Apr 2019. (59 / 8 / 0)
- Explaining a prediction in some nonlinear models
  Cosimo Izzo. FAtt. 21 Apr 2019. (21 / 0 / 0)
- "Why did you do that?": Explaining black box models with Inductive Synthesis
  Görkem Paçaci, David Johnson, S. McKeever, A. Hamfelt. 17 Apr 2019. (35 / 6 / 0)
- Explainability in Human-Agent Systems
  A. Rosenfeld, A. Richardson. XAI. 17 Apr 2019. (83 / 207 / 0)
- Quantifying Model Complexity via Functional Decomposition for Better Post-Hoc Interpretability
  Christoph Molnar, Giuseppe Casalicchio, B. Bischl. FAtt. 08 Apr 2019. (54 / 60 / 0)
- Visualization of Convolutional Neural Networks for Monocular Depth Estimation
  Junjie Hu, Yan Zhang, Takayuki Okatani. MDE. 06 Apr 2019. (124 / 83 / 0)
- An Attentive Survey of Attention Models
  S. Chaudhari, Varun Mithal, Gungor Polatkan, R. Ramanath. 05 Apr 2019. (192 / 666 / 0)
- GNNExplainer: Generating Explanations for Graph Neural Networks
  Rex Ying, Dylan Bourgeois, Jiaxuan You, Marinka Zitnik, J. Leskovec. LLMAG. 10 Mar 2019. (163 / 1,336 / 0)
- Challenges for an Ontology of Artificial Intelligence
  Scott H. Hawley. 25 Feb 2019. (23 / 11 / 0)
- Significance Tests for Neural Networks
  Enguerrand Horel, K. Giesecke. 16 Feb 2019. (57 / 56 / 0)
- RTbust: Exploiting Temporal Patterns for Botnet Detection on Twitter
  Michele Mazza, S. Cresci, Marco Avvenuti, Walter Quattrociocchi, Maurizio Tesconi. 12 Feb 2019. (64 / 197 / 0)
- Assessing the Local Interpretability of Machine Learning Models
  Dylan Slack, Sorelle A. Friedler, C. Scheidegger, Chitradeep Dutta Roy. FAtt. 09 Feb 2019. (60 / 71 / 0)
- Fooling Neural Network Interpretations via Adversarial Model Manipulation
  Juyeon Heo, Sunghwan Joo, Taesup Moon. AAML, FAtt. 06 Feb 2019. (126 / 205 / 0)
- Attention in Natural Language Processing
  Andrea Galassi, Marco Lippi, Paolo Torroni. GNN. 04 Feb 2019. (73 / 481 / 0)
- Interpreting Deep Neural Networks Through Variable Importance
  J. Ish-Horowicz, Dana Udwin, Seth Flaxman, Sarah Filippi, Lorin Crawford. FAtt. 28 Jan 2019. (52 / 14 / 0)
- Fairwashing: the risk of rationalization
  Ulrich Aïvodji, Hiromi Arai, O. Fortineau, Sébastien Gambs, Satoshi Hara, Alain Tapp. FaML. 28 Jan 2019. (70 / 148 / 0)
- Interpretable machine learning: definitions, methods, and applications
  W. James Murdoch, Chandan Singh, Karl Kumbier, R. Abbasi-Asl, Bin Yu. XAI, HAI. 14 Jan 2019. (211 / 1,457 / 0)
- Personalized explanation in machine learning: A conceptualization
  J. Schneider, J. Handali. XAI, FAtt. 03 Jan 2019. (80 / 17 / 0)
- LEAFAGE: Example-based and Feature importance-based Explanations for Black-box ML models
  Ajaya Adhikari, David Tax, R. Satta, M. Faeth. FAtt. 21 Dec 2018. (111 / 11 / 0)
- Interpretable preference learning: a game theoretic framework for large margin on-line feature and rule learning
  Mirko Polato, F. Aiolli. FAtt. 19 Dec 2018. (19 / 8 / 0)
- An Interpretable Model with Globally Consistent Explanations for Credit Risk
  Chaofan Chen, Kangcheng Lin, Cynthia Rudin, Yaron Shaposhnik, Sijia Wang, Tong Wang. FAtt. 30 Nov 2018. (87 / 94 / 0)
- A Multidisciplinary Survey and Framework for Design and Evaluation of Explainable AI Systems
  Sina Mohseni, Niloofar Zarei, Eric D. Ragan. 28 Nov 2018. (122 / 102 / 0)
- Detecting Token Systems on Ethereum
  Michael Fröwis, A. Fuchs, Rainer Böhme. 28 Nov 2018. (131 / 50 / 0)
- What is Interpretable? Using Machine Learning to Design Interpretable Decision-Support Systems
  O. Lahav, Nicholas Mastronarde, M. Schaar. 27 Nov 2018. (64 / 30 / 0)
- Interpretable Credit Application Predictions With Counterfactual Explanations
  Rory Mc Grath, Luca Costabello, Chan Le Van, Paul Sweeney, F. Kamiab, Zhao Shen, Freddy Lecue. FAtt. 13 Nov 2018. (81 / 109 / 0)
- YASENN: Explaining Neural Networks via Partitioning Activation Sequences
  Yaroslav Zharov, Denis Korzhenkov, J. Lyu, Alexander Tuzhilin. FAtt, AAML. 07 Nov 2018. (23 / 6 / 0)
- Deep Weighted Averaging Classifiers
  Dallas Card, Michael J.Q. Zhang, Hao Tang. 06 Nov 2018. (94 / 41 / 0)
- Semantic bottleneck for computer vision tasks
  Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard. 06 Nov 2018. (92 / 17 / 0)
- Towards Adversarial Malware Detection: Lessons Learned from PDF-based Attacks
  Davide Maiorca, Battista Biggio, Giorgio Giacinto. AAML. 02 Nov 2018. (69 / 47 / 0)
- On The Stability of Interpretable Models
  Riccardo Guidotti, Salvatore Ruggieri. FAtt. 22 Oct 2018. (64 / 10 / 0)
- Concise Explanations of Neural Networks using Adversarial Training
  P. Chalasani, Jiefeng Chen, Aravind Sadagopan, S. Jha, Xi Wu. AAML, FAtt. 15 Oct 2018. (162 / 13 / 0)
- Explaining Black Boxes on Sequential Data using Weighted Automata
  Stéphane Ayache, Rémi Eyraud, Noé Goudian. 12 Oct 2018. (69 / 44 / 0)
- On the Art and Science of Machine Learning Explanations
  Patrick Hall. FAtt, XAI. 05 Oct 2018. (92 / 30 / 0)
- A Gradient-Based Split Criterion for Highly Accurate and Transparent Model Trees
  Klaus Broelemann, Gjergji Kasneci. 25 Sep 2018. (84 / 20 / 0)
- Extractive Adversarial Networks: High-Recall Explanations for Identifying Personal Attacks in Social Media Posts
  Samuel Carton, Qiaozhu Mei, Paul Resnick. FAtt, AAML. 01 Sep 2018. (124 / 34 / 0)
- Using Machine Learning Safely in Automotive Software: An Assessment and Adaption of Software Process Requirements in ISO 26262
  Rick Salay, Krzysztof Czarnecki. 05 Aug 2018. (104 / 70 / 0)
- Contrastive Explanations for Reinforcement Learning in terms of Expected Consequences
  J. V. D. Waa, J. Diggelen, K. Bosch, Mark Antonius Neerincx. OffRL. 23 Jul 2018. (73 / 109 / 0)
- Open the Black Box Data-Driven Explanation of Black Box Decision Systems
  D. Pedreschi, F. Giannotti, Riccardo Guidotti, A. Monreale, Luca Pappalardo, Salvatore Ruggieri, Franco Turini. 26 Jun 2018. (114 / 38 / 0)
- Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems
  Richard J. Tomsett, Dave Braines, Daniel Harborne, Alun D. Preece, Supriyo Chakraborty. FaML. 20 Jun 2018. (143 / 166 / 0)
- Defining Locality for Surrogates in Post-hoc Interpretablity
  Thibault Laugel, X. Renard, Marie-Jeanne Lesot, Christophe Marsala, Marcin Detyniecki. FAtt. 19 Jun 2018. (94 / 80 / 0)
- Contrastive Explanations with Local Foil Trees
  J. V. D. Waa, M. Robeer, J. Diggelen, Matthieu J. S. Brinkhuis, Mark Antonius Neerincx. FAtt. 19 Jun 2018. (79 / 82 / 0)
- Explaining Explanations: An Overview of Interpretability of Machine Learning
  Leilani H. Gilpin, David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael A. Specter, Lalana Kagal. XAI. 31 May 2018. (124 / 1,869 / 0)
- Local Rule-Based Explanations of Black Box Decision Systems
  Riccardo Guidotti, A. Monreale, Salvatore Ruggieri, D. Pedreschi, Franco Turini, F. Giannotti. 28 May 2018. (144 / 440 / 0)
- Faithfully Explaining Rankings in a News Recommender System
  Maartje ter Hoeve, Anne Schuth, Daan Odijk, Maarten de Rijke. OffRL. 14 May 2018. (43 / 24 / 0)
- Disentangling Controllable and Uncontrollable Factors of Variation by Interacting with the World
  Yoshihide Sawada. DRL. 19 Apr 2018. (68 / 10 / 0)
- A review of possible effects of cognitive biases on the interpretation of rule-based machine learning models
  Tomáš Kliegr, Š. Bahník, Johannes Furnkranz. 09 Apr 2018. (106 / 105 / 0)