arXiv:1911.12116
Analysis of Explainers of Black Box Deep Neural Networks for Computer Vision: A Survey
Vanessa Buhrmester, David Münch, Michael Arens
27 November 2019
[MLAU, FaML, XAI, AAML]

Cited By
Papers citing "Analysis of Explainers of Black Box Deep Neural Networks for Computer Vision: A Survey" (50 of 70 papers shown):
Distilling Machine Learning's Added Value: Pareto Fronts in Atmospheric Applications
Tom Beucler, Arthur Grundner, Sara Shamekh, Peter Ukkonen, Matthew Chantry, Ryan Lagerquist
04 Aug 2024 · 73 · 0 · 0

On Neural Networks as Infinite Tree-Structured Probabilistic Graphical Models
Yue Liu, Alexandar J. Thomson, Matthew M. Engelhard, David Page
[BDL, AI4CE] 27 May 2023 · 160 · 0 · 0

A Survey on the Explainability of Supervised Machine Learning
Nadia Burkart, Marco F. Huber
[FaML, XAI] 16 Nov 2020 · 48 · 773 · 0

Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey
Arun Das, P. Rad
[XAI] 16 Jun 2020 · 152 · 602 · 0

An Investigation of COVID-19 Spreading Factors with Explainable AI Techniques
Xiuyi Fan, Siyuan Liu, Jiarong Chen, T. Henderson
05 May 2020 · 30 · 7 · 0

DeepCOVIDExplainer: Explainable COVID-19 Diagnosis Based on Chest X-ray Images
Md. Rezaul Karim, Till Döhmen, Dietrich Rebholz-Schuhmann, Stefan Decker, Michael Cochez, Oya Beyan
09 Apr 2020 · 53 · 88 · 0

Attacking Optical Flow
Anurag Ranjan, J. Janai, Andreas Geiger, Michael J. Black
[AAML, 3DPC] 22 Oct 2019 · 60 · 87 · 0

Understanding Deep Networks via Extremal Perturbations and Smooth Masks
Ruth C. Fong, Mandela Patrick, Andrea Vedaldi
[AAML] 18 Oct 2019 · 66 · 415 · 0

Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations
Fred Hohman, Haekyu Park, Caleb Robinson, Duen Horng Chau
[FAtt, 3DH, HAI] 04 Apr 2019 · 39 · 217 · 0

On the (In)fidelity and Sensitivity of Explanations
Chih-Kuan Yeh, Cheng-Yu Hsieh, A. Suggala, David I. Inouye, Pradeep Ravikumar
[FAtt] 27 Jan 2019 · 58 · 453 · 0

Equalizing Gender Biases in Neural Machine Translation with Word Embeddings Techniques
Joel Escudé Font, Marta R. Costa-jussà
10 Jan 2019 · 53 · 170 · 0

ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness
Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix Wichmann, Wieland Brendel
29 Nov 2018 · 100 · 2,668 · 0

A Benchmark for Interpretability Methods in Deep Neural Networks
Sara Hooker, D. Erhan, Pieter-Jan Kindermans, Been Kim
[FAtt, UQCV] 28 Jun 2018 · 105 · 681 · 0

Explaining Explanations: An Overview of Interpretability of Machine Learning
Leilani H. Gilpin, David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael A. Specter, Lalana Kagal
[XAI] 31 May 2018 · 86 · 1,858 · 0

Adversarial Attacks on Face Detectors using Neural Net based Constrained Optimization
A. Bose, P. Aarabi
[AAML] 31 May 2018 · 40 · 89 · 0

Multimodal Explanations: Justifying Decisions and Pointing to the Evidence
Dong Huk Park, Lisa Anne Hendricks, Zeynep Akata, Anna Rohrbach, Bernt Schiele, Trevor Darrell, Marcus Rohrbach
15 Feb 2018 · 73 · 421 · 0

A Survey Of Methods For Explaining Black Box Models
Riccardo Guidotti, A. Monreale, Salvatore Ruggieri, Franco Turini, D. Pedreschi, F. Giannotti
[XAI] 06 Feb 2018 · 124 · 3,957 · 0

How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation
Menaka Narayanan, Emily Chen, Jeffrey He, Been Kim, S. Gershman, Finale Doshi-Velez
[FAtt, XAI] 02 Feb 2018 · 99 · 242 · 0

A General Framework for Adversarial Examples with Objectives
Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, Michael K. Reiter
[AAML, GAN] 31 Dec 2017 · 51 · 193 · 0

An Introduction to Deep Visual Explanation
H. Babiker, Randy Goebel
[FAtt, AAML] 26 Nov 2017 · 52 · 19 · 0

Using KL-divergence to focus Deep Visual Explanation
H. Babiker, Randy Goebel
[FAtt] 17 Nov 2017 · 54 · 12 · 0

Grad-CAM++: Improved Visual Explanations for Deep Convolutional Networks
Aditya Chattopadhyay, Anirban Sarkar, Prantik Howlader, V. Balasubramanian
[FAtt] 30 Oct 2017 · 106 · 2,297 · 0

Adversarial Examples for Evaluating Reading Comprehension Systems
Robin Jia, Percy Liang
[AAML, ELM] 23 Jul 2017 · 196 · 1,605 · 0

SmoothGrad: removing noise by adding noise
D. Smilkov, Nikhil Thorat, Been Kim, F. Viégas, Martin Wattenberg
[FAtt, ODL] 12 Jun 2017 · 201 · 2,221 · 0

A Unified Approach to Interpreting Model Predictions
Scott M. Lundberg, Su-In Lee
[FAtt] 22 May 2017 · 1.1K · 21,906 · 0

Learning how to explain neural networks: PatternNet and PatternAttribution
Pieter-Jan Kindermans, Kristof T. Schütt, Maximilian Alber, K. Müller, D. Erhan, Been Kim, Sven Dähne
[XAI, FAtt] 16 May 2017 · 73 · 339 · 0

Network Dissection: Quantifying Interpretability of Deep Visual Representations
David Bau, Bolei Zhou, A. Khosla, A. Oliva, Antonio Torralba
[MILM, FAtt] 19 Apr 2017 · 146 · 1,516 · 1

Interpretable Explanations of Black Boxes by Meaningful Perturbation
Ruth C. Fong, Andrea Vedaldi
[FAtt, AAML] 11 Apr 2017 · 74 · 1,519 · 0

Learning Important Features Through Propagating Activation Differences
Avanti Shrikumar, Peyton Greenside, A. Kundaje
[FAtt] 10 Apr 2017 · 198 · 3,871 · 0

Learning to Generate Reviews and Discovering Sentiment
Alec Radford, Rafal Jozefowicz, Ilya Sutskever
05 Apr 2017 · 93 · 509 · 0

Axiomatic Attribution for Deep Networks
Mukund Sundararajan, Ankur Taly, Qiqi Yan
[OOD, FAtt] 04 Mar 2017 · 182 · 5,986 · 0

Opening the Black Box of Deep Neural Networks via Information
Ravid Shwartz-Ziv, Naftali Tishby
[AI4CE] 02 Mar 2017 · 98 · 1,409 · 0

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
[XAI, FaML] 28 Feb 2017 · 399 · 3,787 · 0

Visualizing Deep Neural Network Decisions: Prediction Difference Analysis
L. Zintgraf, Taco S. Cohen, T. Adel, Max Welling
[FAtt] 15 Feb 2017 · 132 · 708 · 0

TreeView: Peeking into Deep Neural Networks Via Feature-Space Partitioning
Jayaraman J. Thiagarajan, B. Kailkhura, P. Sattigeri, Karthikeyan N. Ramamurthy
22 Nov 2016 · 54 · 38 · 0

VisualBackProp: efficient visualization of CNNs
Mariusz Bojarski, A. Choromańska, K. Choromanski, Bernhard Firner, L. Jackel, Urs Muller, Karol Zieba
[FAtt] 16 Nov 2016 · 65 · 74 · 0

Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization
Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra
[FAtt] 07 Oct 2016 · 297 · 20,003 · 0

RETAIN: An Interpretable Predictive Model for Healthcare using Reverse Time Attention Mechanism
Edward Choi, M. T. Bahadori, Joshua A. Kulas, A. Schuetz, Walter F. Stewart, Jimeng Sun
[AI4TS] 19 Aug 2016 · 115 · 1,245 · 0

Top-down Neural Attention by Excitation Backprop
Jianming Zhang, Zhe Lin, Jonathan Brandt, Xiaohui Shen, Stan Sclaroff
01 Aug 2016 · 79 · 947 · 0

Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings
Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, Adam Kalai
[CVBM, FaML] 21 Jul 2016 · 107 · 3,135 · 0

European Union regulations on algorithmic decision-making and a "right to explanation"
B. Goodman, Seth Flaxman
[FaML, AILaw] 28 Jun 2016 · 63 · 1,900 · 0

Rationalizing Neural Predictions
Tao Lei, Regina Barzilay, Tommi Jaakkola
13 Jun 2016 · 110 · 812 · 0

The Mythos of Model Interpretability
Zachary Chase Lipton
[FaML] 10 Jun 2016 · 180 · 3,699 · 0

The Latin American Giant Observatory: a successful collaboration in Latin America based on Cosmic Rays and computer science domains
Hernán Asorey, R. Mayo-García, L. Núñez, M. Pascual, A. J. Rubio-Montero, M. Suárez-Durán, L. A. Torres-Niño
30 May 2016 · 81 · 5 · 0

Interpretable Deep Neural Networks for Single-Trial EEG Classification
I. Sturm, Sebastian Bach, Wojciech Samek, K. Müller
27 Apr 2016 · 58 · 353 · 0

Colorful Image Colorization
Richard Zhang, Phillip Isola, Alexei A. Efros
28 Mar 2016 · 127 · 3,529 · 0

"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
[FAtt, FaML] 16 Feb 2016 · 1.2K · 16,976 · 0

Learning Deep Features for Discriminative Localization
Bolei Zhou, A. Khosla, Àgata Lapedriza, A. Oliva, Antonio Torralba
[SSL, SSeg, FAtt] 14 Dec 2015 · 250 · 9,308 · 0

Deep Residual Learning for Image Recognition
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
[MedIm] 10 Dec 2015 · 2.2K · 193,878 · 0

DeepFool: a simple and accurate method to fool deep neural networks
Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, P. Frossard
[AAML] 14 Nov 2015 · 148 · 4,895 · 0