arXiv: 1802.01933 (v3, latest)
A Survey Of Methods For Explaining Black Box Models
Riccardo Guidotti, A. Monreale, Salvatore Ruggieri, Franco Turini, D. Pedreschi, F. Giannotti
6 February 2018 · XAI

Papers citing "A Survey Of Methods For Explaining Black Box Models" (50 of 1,104 papers shown)

Deep Learning for Android Malware Defenses: a Systematic Literature Review
  Yue Liu, Chakkrit Tantithamthavorn, Li Li, Yepang Liu · AAML · 09 Mar 2021 · 88 / 81 / 0

Explanations in Autonomous Driving: A Survey
  Daniel Omeiza, Helena Webb, Marina Jirotka, Lars Kunze · 09 Mar 2021 · 97 / 223 / 0

Counterfactuals and Causability in Explainable Artificial Intelligence: Theory, Algorithms, and Applications
  Yu-Liang Chou, Catarina Moreira, P. Bruza, Chun Ouyang, Joaquim A. Jorge · CML · 07 Mar 2021 · 165 / 179 / 0

Ensembles of Random SHAPs
  Lev V. Utkin, A. Konstantinov · FAtt · 04 Mar 2021 · 59 / 21 / 0

Evaluating Robustness of Counterfactual Explanations
  André Artelt, Valerie Vaquet, Riza Velioglu, Fabian Hinder, Johannes Brinkrolf, M. Schilling, Barbara Hammer · 03 Mar 2021 · 128 / 46 / 0

Interpretable Multi-Modal Hate Speech Detection
  Prashanth Vijayaraghavan, Hugo Larochelle, D. Roy · 02 Mar 2021 · 48 / 36 / 0

Counterfactual Explanations for Oblique Decision Trees: Exact, Efficient Algorithms
  Miguel Á. Carreira-Perpiñán, Suryabhan Singh Hada · CML, AAML · 01 Mar 2021 · 52 / 35 / 0

Visualizing Rule Sets: Exploration and Validation of a Design Space
  Jun Yuan, O. Nov, E. Bertini · 01 Mar 2021 · 44 / 1 / 0

Reasons, Values, Stakeholders: A Philosophical Framework for Explainable Artificial Intelligence
  Atoosa Kasirzadeh · 01 Mar 2021 · 70 / 24 / 0

Benchmarking and Survey of Explanation Methods for Black Box Models
  F. Bodria, F. Giannotti, Riccardo Guidotti, Francesca Naretto, D. Pedreschi, S. Rinzivillo · XAI · 25 Feb 2021 · 123 / 234 / 0

HiPaR: Hierarchical Pattern-aided Regression
  Luis Galárraga, Olivier Pelgrin, Alexandre Termier · 24 Feb 2021 · 19 / 1 / 0

Teach Me to Explain: A Review of Datasets for Explainable Natural Language Processing
  Sarah Wiegreffe, Ana Marasović · XAI · 24 Feb 2021 · 93 / 146 / 0

DNN2LR: Automatic Feature Crossing for Credit Scoring
  Qiang Liu, Zhaocheng Liu, Haoli Zhang, Yuntian Chen, Jun Zhu · 24 Feb 2021 · 33 / 0 / 0

Resilience of Bayesian Layer-Wise Explanations under Adversarial Attacks
  Ginevra Carbone, G. Sanguinetti, Luca Bortolussi · FAtt, AAML · 22 Feb 2021 · 74 / 4 / 0

SQAPlanner: Generating Data-Informed Software Quality Improvement Plans
  Dilini Sewwandi Rajapaksha, Chakkrit Tantithamthavorn, Jirayus Jiarpakdee, Christoph Bergmeir, J. Grundy, Wray Buntine · 19 Feb 2021 · 80 / 35 / 0

Intuitively Assessing ML Model Reliability through Example-Based Explanations and Editing Model Inputs
  Harini Suresh, Kathleen M. Lewis, John Guttag, Arvind Satyanarayan · FAtt · 17 Feb 2021 · 93 / 26 / 0

What Do We Want From Explainable Artificial Intelligence (XAI)? -- A Stakeholder Perspective on XAI and a Conceptual Model Guiding Interdisciplinary XAI Research
  Markus Langer, Daniel Oster, Timo Speith, Holger Hermanns, Lena Kästner, Eva Schmidt, Andreas Sesing, Kevin Baum · XAI · 15 Feb 2021 · 125 / 432 / 0

Integrated Grad-CAM: Sensitivity-Aware Visual Explanation of Deep Convolutional Networks via Integrated Gradient-Based Scoring
  S. Sattarzadeh, M. Sudhakar, Konstantinos N. Plataniotis, Jongseong Jang, Yeonjeong Jeong, Hyunwoo J. Kim · FAtt · 15 Feb 2021 · 59 / 39 / 0

Deep Co-Attention Network for Multi-View Subspace Learning
  Lecheng Zheng, Y. Cheng, Hongxia Yang, Nan Cao, Jingrui He · 15 Feb 2021 · 60 / 33 / 0

What does LIME really see in images?
  Damien Garreau, Dina Mardaoui · FAtt · 11 Feb 2021 · 64 / 40 / 0

Rationally Inattentive Utility Maximization for Interpretable Deep Image Classification
  Kunal Pattanayak, Vikram Krishnamurthy · 09 Feb 2021 · 19 / 0 / 0

Towards a mathematical framework to inform Neural Network modelling via Polynomial Regression
  Pablo Morala, Jenny Alexandra Cifuentes, R. Lillo, Iñaki Ucar · 07 Feb 2021 · 83 / 34 / 0

Bandits for Learning to Explain from Explanations
  Freya Behrens, Stefano Teso, Davide Mottin · FAtt · 07 Feb 2021 · 41 / 1 / 0

CF-GNNExplainer: Counterfactual Explanations for Graph Neural Networks
  Ana Lucic, Maartje ter Hoeve, Gabriele Tolomei, Maarten de Rijke, Fabrizio Silvestri · 05 Feb 2021 · 213 / 146 / 0

EUCA: the End-User-Centered Explainable AI Framework
  Weina Jin, Jianyu Fan, D. Gromala, Philippe Pasquier, Ghassan Hamarneh · 04 Feb 2021 · 111 / 26 / 0

Directive Explanations for Actionable Explainability in Machine Learning Applications
  Ronal Singh, Paul Dourish, Piers Howe, Tim Miller, L. Sonenberg, Eduardo Velloso, F. Vetere · 03 Feb 2021 · 40 / 35 / 0

A Survey on Understanding, Visualizations, and Explanation of Deep Neural Networks
  Atefeh Shahroudnejad · FaML, AAML, AI4CE, XAI · 02 Feb 2021 · 119 / 36 / 0

Designing AI for Trust and Collaboration in Time-Constrained Medical Decisions: A Sociotechnical Lens
  Maia L. Jacobs, Jeffrey He, Melanie F. Pradier, Barbara D. Lam, Andrew C Ahn, T. McCoy, R. Perlis, Finale Doshi-Velez, Krzysztof Z. Gajos · 01 Feb 2021 · 120 / 149 / 0

Explaining Natural Language Processing Classifiers with Occlusion and Language Modeling
  David Harbecke · AAML · 28 Jan 2021 · 51 / 2 / 0

Reviewable Automated Decision-Making: A Framework for Accountable Algorithmic Systems
  Jennifer Cobbe, M. S. Lee, Jatinder Singh · 26 Jan 2021 · 61 / 77 / 0

A Few Good Counterfactuals: Generating Interpretable, Plausible and Diverse Counterfactual Explanations
  Barry Smyth, Mark T. Keane · CML · 22 Jan 2021 · 96 / 27 / 0

GLocalX -- From Local to Global Explanations of Black Box AI Models
  Mattia Setzu, Riccardo Guidotti, A. Monreale, Franco Turini, D. Pedreschi, F. Giannotti · 19 Jan 2021 · 105 / 121 / 0

Towards interpreting ML-based automated malware detection models: a survey
  Yuzhou Lin, Xiaolin Chang · 15 Jan 2021 · 124 / 7 / 0

Explainability of deep vision-based autonomous driving systems: Review and challenges
  Éloi Zablocki, H. Ben-younes, P. Pérez, Matthieu Cord · XAI · 13 Jan 2021 · 186 / 177 / 0

Towards Interpretable Ensemble Learning for Image-based Malware Detection
  Yuzhou Lin, Xiaolin Chang · AAML · 13 Jan 2021 · 65 / 8 / 0

How Much Automation Does a Data Scientist Want?
  Dakuo Wang, Q. V. Liao, Yunfeng Zhang, Udayan Khurana, Horst Samulowitz, Soya Park, Michael J. Muller, Lisa Amini · AI4CE · 07 Jan 2021 · 75 / 56 / 0

Explainable AI and Adoption of Financial Algorithmic Advisors: an Experimental Study
  D. David, Yehezkel S. Resheff, Talia Tron · 05 Jan 2021 · 44 / 24 / 0

A Survey on Neural Network Interpretability
  Yu Zhang, Peter Tiño, A. Leonardis, K. Tang · FaML, XAI · 28 Dec 2020 · 209 / 689 / 0

Weighted defeasible knowledge bases and a multipreference semantics for a deep neural network model
  Laura Giordano, Daniele Theseider Dupré · 24 Dec 2020 · 71 / 35 / 0

On Relating 'Why?' and 'Why Not?' Explanations
  Alexey Ignatiev, Nina Narodytska, Nicholas M. Asher, Sasha Rubin · XAI, FAtt, LRM · 21 Dec 2020 · 59 / 26 / 0

Explaining Black-box Models for Biomedical Text Classification
  M. Moradi, Matthias Samwald · 20 Dec 2020 · 74 / 21 / 0

XAI-P-T: A Brief Review of Explainable Artificial Intelligence from Practice to Theory
  Nazanin Fouladgar, Kary Främling · XAI · 17 Dec 2020 · 39 / 4 / 0

On Exploiting Hitting Sets for Model Reconciliation
  Stylianos Loukas Vasileiou, Alessandro Previti, William Yeoh · 16 Dec 2020 · 41 / 26 / 0

Developing Future Human-Centered Smart Cities: Critical Analysis of Smart City Security, Interpretability, and Ethical Challenges
  Kashif Ahmad, Majdi Maabreh, M. Ghaly, Khalil Khan, Junaid Qadir, Ala I. Al-Fuqaha · 14 Dec 2020 · 115 / 157 / 0

Explanation from Specification
  Harish Naik, Gyorgy Turán · XAI · 13 Dec 2020 · 47 / 0 / 0

Demystifying Deep Neural Networks Through Interpretation: A Survey
  Giang Dao, Minwoo Lee · FaML, FAtt · 13 Dec 2020 · 64 / 1 / 0

xRAI: Explainable Representations through AI
  Christian Bartelt, Sascha Marton, Heiner Stuckenschmidt · 10 Dec 2020 · 8 / 2 / 0

Hybrid analytic and machine-learned baryonic property insertion into galactic dark matter haloes
  Ben Moews, R. Davé, Sourav Mitra, Sultan Hassan, W. Cui · AI4CE · 10 Dec 2020 · 132 / 7 / 0

Influence-Driven Explanations for Bayesian Network Classifiers
  Antonio Rago, Emanuele Albini, P. Baroni, Francesca Toni · 10 Dec 2020 · 92 / 9 / 0

Deep Argumentative Explanations
  Emanuele Albini, Piyawat Lertvittayakumjorn, Antonio Rago, Francesca Toni · AAML · 10 Dec 2020 · 52 / 5 / 0