ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Axiomatic Attribution for Deep Networks
4 March 2017
Mukund Sundararajan, Ankur Taly, Qiqi Yan
Tags: OOD, FAtt
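This paper introduces Integrated Gradients: the attribution for feature i is (x_i - x'_i) times the average gradient of the model along the straight-line path from a baseline x' to the input x. A minimal NumPy sketch of the Riemann-sum approximation, using a hypothetical linear model purely for illustration (the function names `integrated_gradients`, `grad_F`, and the weights `w` are assumptions, not from the paper):

```python
import numpy as np

def integrated_gradients(grad, x, baseline, steps=100):
    """Riemann-sum (midpoint rule) approximation of Integrated Gradients.

    grad: function returning dF/dx at a point.
    Returns per-feature attributions (x - baseline) * average path gradient.
    """
    alphas = (np.arange(steps) + 0.5) / steps            # midpoints in (0, 1)
    path = baseline + alphas[:, None] * (x - baseline)   # (steps, d) interpolants
    avg_grad = np.mean([grad(p) for p in path], axis=0)  # average gradient on path
    return (x - baseline) * avg_grad

# Hypothetical linear model F(x) = w . x, whose gradient is constant,
# so IG should recover w * (x - baseline) exactly.
w = np.array([1.0, -2.0, 3.0])
F = lambda x: np.dot(w, x)
grad_F = lambda x: w
x = np.array([1.0, 1.0, 1.0])
baseline = np.zeros(3)
ig = integrated_gradients(grad_F, x, baseline)
# Completeness axiom: attributions sum to F(x) - F(baseline).
assert np.isclose(ig.sum(), F(x) - F(baseline))
```

For this linear model the attributions are exactly `w * (x - baseline)`; for a real network the gradient would come from an autodiff framework rather than a closed form.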

Papers citing "Axiomatic Attribution for Deep Networks"

21 / 2,871 papers shown
Exact and Consistent Interpretation for Piecewise Linear Neural Networks: A Closed Form Solution
Lingyang Chu, X. Hu, Juhua Hu, Lanjun Wang, J. Pei
17 Feb 2018

Influence-Directed Explanations for Deep Convolutional Networks
Klas Leino, S. Sen, Anupam Datta, Matt Fredrikson, Linyi Li
Tags: TDI, FAtt
11 Feb 2018

Granger-causal Attentive Mixtures of Experts: Learning Important Features with Neural Networks
Patrick Schwab, Djordje Miladinovic, W. Karlen
Tags: CML
06 Feb 2018

A Survey Of Methods For Explaining Black Box Models
Riccardo Guidotti, A. Monreale, Salvatore Ruggieri, Franco Turini, D. Pedreschi, F. Giannotti
Tags: XAI
06 Feb 2018

Evaluating neural network explanation methods using hybrid documents and morphological agreement
Nina Pörner, Benjamin Roth, Hinrich Schütze
19 Jan 2018

Beyond Word Importance: Contextual Decomposition to Extract Interactions from LSTMs
W. James Murdoch, Peter J. Liu, Bin Yu
16 Jan 2018

Beyond saliency: understanding convolutional neural networks from saliency prediction on layer-wise relevance propagation
Heyi Li, Yunke Tian, Klaus Mueller, Xin Chen
Tags: FAtt
22 Dec 2017

Anesthesiologist-level forecasting of hypoxemia with only SpO2 data using deep learning
G. Erion, Hugh Chen, Scott M. Lundberg, Su-In Lee
02 Dec 2017

Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, F. Viégas, Rory Sayres
Tags: FAtt
30 Nov 2017

Visual Feature Attribution using Wasserstein GANs
Christian F. Baumgartner, Lisa M. Koch, K. Tezcan, Jia Xi Ang, E. Konukoglu
Tags: GAN, MedIm
24 Nov 2017

Towards better understanding of gradient-based attribution methods for Deep Neural Networks
Marco Ancona, Enea Ceolini, Cengiz Öztireli, Markus Gross
Tags: FAtt
16 Nov 2017

MARGIN: Uncovering Deep Neural Networks using Graph Signal Analysis
Rushil Anirudh, Jayaraman J. Thiagarajan, R. Sridhar, T. Bremer
Tags: FAtt, AAML
15 Nov 2017

The (Un)reliability of saliency methods
Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, D. Erhan, Been Kim
Tags: FAtt, XAI
02 Nov 2017

Interpretation of Neural Networks is Fragile
Amirata Ghorbani, Abubakar Abid, James Zou
Tags: FAtt, AAML
29 Oct 2017

Case Study: Explaining Diabetic Retinopathy Detection Deep CNNs via Integrated Gradients
Linyi Li, Matt Fredrikson, S. Sen, Anupam Datta
Tags: FAtt
27 Sep 2017

Axiomatic Characterization of Data-Driven Influence Measures for Classification
Jakub Sliwinski, Martin Strobel, Yair Zick
Tags: TDI
07 Aug 2017

MAGIX: Model Agnostic Globally Interpretable Explanations
Nikaash Puri, Piyush B. Gupta, Pratiksha Agarwal, Sukriti Verma, Balaji Krishnamurthy
Tags: FAtt
22 Jun 2017

SmoothGrad: removing noise by adding noise
D. Smilkov, Nikhil Thorat, Been Kim, F. Viégas, Martin Wattenberg
Tags: FAtt, ODL
12 Jun 2017

Learning how to explain neural networks: PatternNet and PatternAttribution
Pieter-Jan Kindermans, Kristof T. Schütt, Maximilian Alber, K. Müller, D. Erhan, Been Kim, Sven Dähne
Tags: XAI, FAtt
16 May 2017

Detecting Statistical Interactions from Neural Network Weights
Michael Tsang, Dehua Cheng, Yan Liu
14 May 2017

Streaming Weak Submodularity: Interpreting Neural Networks on the Fly
Ethan R. Elenberg, A. Dimakis, Moran Feldman, Amin Karbasi
08 Mar 2017