Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges
Gabrielle Ras, Marcel van Gerven, W. Haselager
arXiv:1803.07517 · 20 March 2018 · XAI

Papers citing "Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges" (41 papers)

Less is More: The Influence of Pruning on the Explainability of CNNs
David Weber, F. Merkle, Pascal Schöttle, Stephan Schlögl, Martin Nocker · FAtt · 17 Feb 2023 · 1 citation

Path-Specific Counterfactual Fairness
Silvia Chiappa, Thomas P. S. Gillam · CML, FaML · 22 Feb 2018 · 337 citations

Beyond Word Importance: Contextual Decomposition to Extract Interactions from LSTMs
W. James Murdoch, Peter J. Liu, Bin Yu · 16 Jan 2018 · 210 citations

Audio Adversarial Examples: Targeted Attacks on Speech-to-Text
Nicholas Carlini, D. Wagner · AAML · 05 Jan 2018 · 1,077 citations

Deep Learning: A Critical Appraisal
G. Marcus · HAI, VLM · 02 Jan 2018 · 1,040 citations

What do we need to build explainable AI systems for the medical domain?
Andreas Holzinger, Chris Biemann, C. Pattichis, D. Kell · 28 Dec 2017 · 689 citations

Adversarial Phenomenon in the Eyes of Bayesian Deep Learning
Ambrish Rawat, Martin Wistuba, Maria-Irina Nicolae · BDL, AAML · 22 Nov 2017 · 39 citations

Towards better understanding of gradient-based attribution methods for Deep Neural Networks
Marco Ancona, Enea Ceolini, Cengiz Öztireli, Markus Gross · FAtt · 16 Nov 2017 · 146 citations

AOGNets: Compositional Grammatical Architectures for Deep Learning
Xilai Li, Xi Song, Tianfu Wu · 15 Nov 2017 · 26 citations

Towards Interpretable R-CNN by Unfolding Latent Structures
Tianfu Wu, Wei Sun, Xilai Li, Xi Song, Yangqiu Song · ObjD · 14 Nov 2017 · 20 citations

Intriguing Properties of Adversarial Examples
E. D. Cubuk, Barret Zoph, S. Schoenholz, Quoc V. Le · AAML · 08 Nov 2017 · 84 citations

The (Un)reliability of saliency methods
Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, D. Erhan, Been Kim · FAtt, XAI · 02 Nov 2017 · 684 citations

Detecting Adversarial Attacks on Neural Network Policies with Visual Foresight
Yen-Chen Lin, Ming-Yuan Liu, Min Sun, Jia-Bin Huang · AAML · 02 Oct 2017 · 48 citations

What Does Explainable AI Really Mean? A New Conceptualization of Perspectives
Derek Doran, Sarah Schulz, Tarek R. Besold · XAI · 02 Oct 2017 · 438 citations

Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models
Wojciech Samek, Thomas Wiegand, K. Müller · XAI, VLM · 28 Aug 2017 · 1,188 citations

Towards Interpretable Deep Neural Networks by Leveraging Adversarial Examples
Yinpeng Dong, Hang Su, Jun Zhu, Fan Bao · AAML · 18 Aug 2017 · 129 citations

A glass-box interactive machine learning approach for solving NP-hard problems with the human-in-the-loop
Andreas Holzinger, M. Plass, K. Holzinger, G. Crişan, Camelia-M. Pintea, Vasile Palade · 03 Aug 2017 · 93 citations

Interpretable & Explorable Approximations of Black Box Models
Himabindu Lakkaraju, Ece Kamar, R. Caruana, J. Leskovec · FAtt · 04 Jul 2017 · 254 citations

Methods for Interpreting and Understanding Deep Neural Networks
G. Montavon, Wojciech Samek, K. Müller · FaML · 24 Jun 2017 · 2,257 citations

A simple neural network module for relational reasoning
Adam Santoro, David Raposo, David Barrett, Mateusz Malinowski, Razvan Pascanu, Peter W. Battaglia, Timothy Lillicrap · GNN, NAI · 05 Jun 2017 · 1,613 citations

Causal Effect Inference with Deep Latent-Variable Models
Christos Louizos, Uri Shalit, Joris Mooij, David Sontag, R. Zemel, Max Welling · CML, BDL · 24 May 2017 · 741 citations

Explaining How a Deep Neural Network Trained with End-to-End Learning Steers a Car
Mariusz Bojarski, Philip Yeres, A. Choromańska, K. Choromanski, Bernhard Firner, L. Jackel, Urs Muller · 25 Apr 2017 · 400 citations

Interpretable Explanations of Black Boxes by Meaningful Perturbation
Ruth C. Fong, Andrea Vedaldi · FAtt, AAML · 11 Apr 2017 · 1,517 citations

Learning Important Features Through Propagating Activation Differences
Avanti Shrikumar, Peyton Greenside, A. Kundaje · FAtt · 10 Apr 2017 · 3,865 citations

Understanding Black-box Predictions via Influence Functions
Pang Wei Koh, Percy Liang · TDI · 14 Mar 2017 · 2,878 citations

Improving Interpretability of Deep Neural Networks with Semantic Information
Yinpeng Dong, Hang Su, Jun Zhu, Bo Zhang · 12 Mar 2017 · 125 citations

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim · XAI, FaML · 28 Feb 2017 · 3,776 citations

Visualizing Deep Neural Network Decisions: Prediction Difference Analysis
L. Zintgraf, Taco S. Cohen, T. Adel, Max Welling · FAtt · 15 Feb 2017 · 707 citations

Automatic Rule Extraction from Long Short Term Memory Networks
W. James Murdoch, Arthur Szlam · 08 Feb 2017 · 87 citations

Understanding Neural Networks through Representation Erasure
Jiwei Li, Will Monroe, Dan Jurafsky · AAML, MILM · 24 Dec 2016 · 564 citations

Investigating the influence of noise and distractors on the interpretation of neural networks
Pieter-Jan Kindermans, Kristof T. Schütt, K. Müller, Sven Dähne · FAtt · 22 Nov 2016 · 125 citations

Nothing Else Matters: Model-Agnostic Explanations By Identifying Prediction Invariance
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin · FAtt · 17 Nov 2016 · 64 citations

Semantics derived automatically from language corpora contain human-like biases
Aylin Caliskan, J. Bryson, Arvind Narayanan · 25 Aug 2016 · 2,661 citations

A Taxonomy and Library for Visualizing Learned Features in Convolutional Neural Networks
Felix Grün, Christian Rupprecht, Nassir Navab, Federico Tombari · SSL, FAtt · 24 Jun 2016 · 76 citations

Auditing Black-box Models for Indirect Influence
Philip Adler, Casey Falk, Sorelle A. Friedler, Gabriel Rybeck, C. Scheidegger, Brandon Smith, Suresh Venkatasubramanian · TDI, MLAU · 23 Feb 2016 · 290 citations
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
FAtt
FaML
1.0K
16,931
0
16 Feb 2016

Practical Black-Box Attacks against Machine Learning
Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, S. Jha, Z. Berkay Celik, A. Swami · MLAU, AAML · 08 Feb 2016 · 3,676 citations

Intriguing properties of neural networks
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, D. Erhan, Ian Goodfellow, Rob Fergus · AAML · 21 Dec 2013 · 14,893 citations

Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
Karen Simonyan, Andrea Vedaldi, Andrew Zisserman · FAtt · 20 Dec 2013 · 7,279 citations

Visualizing and Understanding Convolutional Networks
Matthew D. Zeiler, Rob Fergus · FAtt, SSL · 12 Nov 2013 · 15,861 citations

How to Explain Individual Classification Decisions
D. Baehrens, T. Schroeter, Stefan Harmeling, M. Kawanabe, K. Hansen, K. Müller · FAtt · 06 Dec 2009 · 1,102 citations