NormLime: A New Feature Importance Metric for Explaining Deep Neural Networks
arXiv:1909.04200 · 10 September 2019 · [FAtt]
Isaac Ahern, Adam Noack, Luis Guzman-Nateras, Dejing Dou, Boyang Albert Li, Jun Huan
Papers citing "NormLime: A New Feature Importance Metric for Explaining Deep Neural Networks" (21 papers shown)
SurvLIME: A method for explaining machine learning survival models
M. Kovalev, Lev V. Utkin, E. Kasimov · 18 Mar 2020 · 272 / 91 / 0

Global Explanations of Neural Networks: Mapping the Landscape of Predictions
Mark Ibrahim, Melissa Louie, C. Modarres, John Paisley · [FAtt] · 06 Feb 2019 · 81 / 118 / 0

What can AI do for me: Evaluating Machine Learning Interpretations in Cooperative Play
Shi Feng, Jordan L. Boyd-Graber · [HAI] · 23 Oct 2018 · 64 / 130 / 0

Sanity Checks for Saliency Maps
Julius Adebayo, Justin Gilmer, M. Muelly, Ian Goodfellow, Moritz Hardt, Been Kim · [FAtt, AAML, XAI] · 08 Oct 2018 · 152 / 1,970 / 0

L-Shapley and C-Shapley: Efficient Model Interpretation for Structured Data
Jianbo Chen, Le Song, Martin J. Wainwright, Michael I. Jordan · [FAtt, TDI] · 08 Aug 2018 · 115 / 216 / 0

Noise-adding Methods of Saliency Map as Series of Higher Order Partial Derivative
Junghoon Seo, J. Choe, Jamyoung Koo, Seunghyeon Jeon, Beomsu Kim, Taegyun Jeon · [FAtt, ODL] · 08 Jun 2018 · 48 / 29 / 0

Revisiting the Importance of Individual Units in CNNs via Ablation
Bolei Zhou, Yiyou Sun, David Bau, Antonio Torralba · [FAtt] · 07 Jun 2018 · 118 / 117 / 0

Net2Vec: Quantifying and Explaining how Concepts are Encoded by Filters in Deep Neural Networks
Ruth C. Fong, Andrea Vedaldi · [FAtt] · 10 Jan 2018 · 80 / 264 / 0

SmoothGrad: removing noise by adding noise
D. Smilkov, Nikhil Thorat, Been Kim, F. Viégas, Martin Wattenberg · [FAtt, ODL] · 12 Jun 2017 · 210 / 2,236 / 0

A Unified Approach to Interpreting Model Predictions
Scott M. Lundberg, Su-In Lee · [FAtt] · 22 May 2017 · 1.1K / 22,090 / 0

Learning how to explain neural networks: PatternNet and PatternAttribution
Pieter-Jan Kindermans, Kristof T. Schütt, Maximilian Alber, K. Müller, D. Erhan, Been Kim, Sven Dähne · [XAI, FAtt] · 16 May 2017 · 79 / 341 / 0

Learning Important Features Through Propagating Activation Differences
Avanti Shrikumar, Peyton Greenside, A. Kundaje · [FAtt] · 10 Apr 2017 · 203 / 3,884 / 0

Understanding Black-box Predictions via Influence Functions
Pang Wei Koh, Percy Liang · [TDI] · 14 Mar 2017 · 219 / 2,910 / 0

Axiomatic Attribution for Deep Networks
Mukund Sundararajan, Ankur Taly, Qiqi Yan · [OOD, FAtt] · 04 Mar 2017 · 193 / 6,027 / 0

Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization
Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra · [FAtt] · 07 Oct 2016 · 335 / 20,110 / 0

The Mythos of Model Interpretability
Zachary Chase Lipton · [FaML] · 10 Jun 2016 · 183 / 3,708 / 0

"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin · [FAtt, FaML] · 16 Feb 2016 · 1.2K / 17,071 / 0

Explaining NonLinear Classification Decisions with Deep Taylor Decomposition
G. Montavon, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek, Klaus-Robert Müller · [FAtt] · 08 Dec 2015 · 71 / 739 / 0

Striving for Simplicity: The All Convolutional Net
Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, Martin Riedmiller · [FAtt] · 21 Dec 2014 · 254 / 4,681 / 0

Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
Karen Simonyan, Andrea Vedaldi, Andrew Zisserman · [FAtt] · 20 Dec 2013 · 317 / 7,321 / 0

How to Explain Individual Classification Decisions
D. Baehrens, T. Schroeter, Stefan Harmeling, M. Kawanabe, K. Hansen, K. Müller · [FAtt] · 06 Dec 2009 · 146 / 1,104 / 0