NormLime: A New Feature Importance Metric for Explaining Deep Neural Networks
Isaac Ahern, Adam Noack, Luis Guzman-Nateras, Dejing Dou, Boyang Albert Li, Jun Huan
arXiv:1909.04200 · 10 September 2019 · v2 (latest) · FAtt

Papers citing "NormLime: A New Feature Importance Metric for Explaining Deep Neural Networks" (21 papers shown)

SurvLIME: A method for explaining machine learning survival models
M. Kovalev, Lev V. Utkin, E. Kasimov · 18 Mar 2020

Global Explanations of Neural Networks: Mapping the Landscape of Predictions
Mark Ibrahim, Melissa Louie, C. Modarres, John Paisley · FAtt · 06 Feb 2019

What can AI do for me: Evaluating Machine Learning Interpretations in Cooperative Play
Shi Feng, Jordan L. Boyd-Graber · HAI · 23 Oct 2018

Sanity Checks for Saliency Maps
Julius Adebayo, Justin Gilmer, M. Muelly, Ian Goodfellow, Moritz Hardt, Been Kim · FAtt, AAML, XAI · 08 Oct 2018

L-Shapley and C-Shapley: Efficient Model Interpretation for Structured Data
Jianbo Chen, Le Song, Martin J. Wainwright, Michael I. Jordan · FAtt, TDI · 08 Aug 2018

Noise-adding Methods of Saliency Map as Series of Higher Order Partial Derivative
Junghoon Seo, J. Choe, Jamyoung Koo, Seunghyeon Jeon, Beomsu Kim, Taegyun Jeon · FAtt, ODL · 08 Jun 2018

Revisiting the Importance of Individual Units in CNNs via Ablation
Bolei Zhou, Yiyou Sun, David Bau, Antonio Torralba · FAtt · 07 Jun 2018

Net2Vec: Quantifying and Explaining how Concepts are Encoded by Filters in Deep Neural Networks
Ruth C. Fong, Andrea Vedaldi · FAtt · 10 Jan 2018

SmoothGrad: removing noise by adding noise
D. Smilkov, Nikhil Thorat, Been Kim, F. Viégas, Martin Wattenberg · FAtt, ODL · 12 Jun 2017

A Unified Approach to Interpreting Model Predictions
Scott M. Lundberg, Su-In Lee · FAtt · 22 May 2017

Learning how to explain neural networks: PatternNet and PatternAttribution
Pieter-Jan Kindermans, Kristof T. Schütt, Maximilian Alber, K. Müller, D. Erhan, Been Kim, Sven Dähne · XAI, FAtt · 16 May 2017

Learning Important Features Through Propagating Activation Differences
Avanti Shrikumar, Peyton Greenside, A. Kundaje · FAtt · 10 Apr 2017

Understanding Black-box Predictions via Influence Functions
Pang Wei Koh, Percy Liang · TDI · 14 Mar 2017

Axiomatic Attribution for Deep Networks
Mukund Sundararajan, Ankur Taly, Qiqi Yan · OOD, FAtt · 04 Mar 2017

Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization
Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra · FAtt · 07 Oct 2016

The Mythos of Model Interpretability
Zachary Chase Lipton · FaML · 10 Jun 2016

"Why Should I Trust You?": Explaining the Predictions of Any Classifier
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
FAttFaML
1.2K
17,071
0
16 Feb 2016
Explaining NonLinear Classification Decisions with Deep Taylor Decomposition
G. Montavon, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek, Klaus-Robert Muller · FAtt · 08 Dec 2015

Striving for Simplicity: The All Convolutional Net
Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, Martin Riedmiller · FAtt · 21 Dec 2014

Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
Karen Simonyan, Andrea Vedaldi, Andrew Zisserman · FAtt · 20 Dec 2013

How to Explain Individual Classification Decisions
D. Baehrens, T. Schroeter, Stefan Harmeling, M. Kawanabe, K. Hansen, K. Müller · FAtt · 06 Dec 2009