Axiomatic Attribution for Deep Networks

4 March 2017
Mukund Sundararajan
Ankur Taly
Qiqi Yan
OOD, FAtt
ArXiv (abs) · PDF · HTML

Papers citing "Axiomatic Attribution for Deep Networks"

50 / 2,871 papers shown
Local Explanation Methods for Deep Neural Networks Lack Sensitivity to Parameter Values
Julius Adebayo
Justin Gilmer
Ian Goodfellow
Been Kim
FAtt, AAML
82
129
0
08 Oct 2018
Sanity Checks for Saliency Maps
Julius Adebayo
Justin Gilmer
M. Muelly
Ian Goodfellow
Moritz Hardt
Been Kim
FAtt, AAML, XAI
211
1,973
0
08 Oct 2018
On the Art and Science of Machine Learning Explanations
Patrick Hall
FAtt, XAI
92
30
0
05 Oct 2018
Interpreting Layered Neural Networks via Hierarchical Modular Representation
C. Watanabe
84
19
0
03 Oct 2018
Training Machine Learning Models by Regularizing their Explanations
A. Ross
FaML
63
0
0
29 Sep 2018
Stakeholders in Explainable AI
Alun D. Preece
Daniel Harborne
Dave Braines
Richard J. Tomsett
Supriyo Chakraborty
55
157
0
29 Sep 2018
Rethinking Self-driving: Multi-task Knowledge for Better Generalization and Accident Explanation Ability
Zhihao Li
Toshiyuki Motoyoshi
Kazuma Sasaki
T. Ogata
S. Sugano
LRM
77
39
0
28 Sep 2018
Response Characterization for Auditing Cell Dynamics in Long Short-term Memory Networks
Ramin M. Hasani
Alexander Amini
Mathias Lechner
Felix Naser
Radu Grosu
Daniela Rus
57
25
0
11 Sep 2018
Interpreting Neural Networks With Nearest Neighbors
Eric Wallace
Shi Feng
Jordan L. Boyd-Graber
AAML, FAtt, MILM
140
54
0
08 Sep 2018
DeepPINK: reproducible feature selection in deep neural networks
Yang Young Lu
Yingying Fan
Jinchi Lv
William Stafford Noble
FAtt
178
125
0
04 Sep 2018
Dissecting Contextual Word Embeddings: Architecture and Representation
Matthew E. Peters
Mark Neumann
Luke Zettlemoyer
Wen-tau Yih
113
434
0
27 Aug 2018
Deep Learning: Computational Aspects
Nicholas G. Polson
Vadim Sokolov
PINN, BDL, AI4CE
58
14
0
26 Aug 2018
Shedding Light on Black Box Machine Learning Algorithms: Development of an Axiomatic Framework to Assess the Quality of Methods that Explain Individual Predictions
Milo Honegger
52
35
0
15 Aug 2018
iNNvestigate neural networks!
Maximilian Alber
Sebastian Lapuschkin
P. Seegerer
Miriam Hagele
Kristof T. Schütt
G. Montavon
Wojciech Samek
K. Müller
Sven Dähne
Pieter-Jan Kindermans
79
349
0
13 Aug 2018
L-Shapley and C-Shapley: Efficient Model Interpretation for Structured Data
Jianbo Chen
Le Song
Martin J. Wainwright
Michael I. Jordan
FAtt, TDI
117
217
0
08 Aug 2018
Enabling Trust in Deep Learning Models: A Digital Forensics Case Study
Aditya K
Slawomir Grzonkowski
NhienAn Lekhac
44
27
0
03 Aug 2018
Efficient Purely Convolutional Text Encoding
Szymon Malik
A. Lancucki
J. Chorowski
3DV
36
1
0
03 Aug 2018
Symbolic Execution for Deep Neural Networks
D. Gopinath
Kaiyuan Wang
Mengshi Zhang
C. Păsăreanu
S. Khurshid
AAML
81
54
0
27 Jul 2018
Computationally Efficient Measures of Internal Neuron Importance
Avanti Shrikumar
Jocelin Su
A. Kundaje
FAtt
61
30
0
26 Jul 2018
Knockoffs for the mass: new feature importance statistics with false discovery guarantees
Jaime Roquero Gimenez
Amirata Ghorbani
James Zou
CML
88
55
0
17 Jul 2018
Model Reconstruction from Model Explanations
S. Milli
Ludwig Schmidt
Anca Dragan
Moritz Hardt
FAtt
66
179
0
13 Jul 2018
Direct Uncertainty Prediction for Medical Second Opinions
M. Raghu
Katy Blumer
Rory Sayres
Ziad Obermeyer
Robert D. Kleinberg
S. Mullainathan
Jon M. Kleinberg
OOD, UD
135
137
0
04 Jul 2018
A Unified Approach to Quantifying Algorithmic Unfairness: Measuring Individual & Group Unfairness via Inequality Indices
Till Speicher
Hoda Heidari
Nina Grgic-Hlaca
Krishna P. Gummadi
Adish Singla
Adrian Weller
Muhammad Bilal Zafar
FaML
106
265
0
02 Jul 2018
A Benchmark for Interpretability Methods in Deep Neural Networks
Sara Hooker
D. Erhan
Pieter-Jan Kindermans
Been Kim
FAtt, UQCV
134
684
0
28 Jun 2018
This Looks Like That: Deep Learning for Interpretable Image Recognition
Chaofan Chen
Oscar Li
Chaofan Tao
A. Barnett
Jonathan Su
Cynthia Rudin
330
1,193
0
27 Jun 2018
xGEMs: Generating Examplars to Explain Black-Box Models
Shalmali Joshi
Oluwasanmi Koyejo
Been Kim
Joydeep Ghosh
MLAU
75
40
0
22 Jun 2018
On the Robustness of Interpretability Methods
David Alvarez-Melis
Tommi Jaakkola
121
530
0
21 Jun 2018
RUDDER: Return Decomposition for Delayed Rewards
Jose A. Arjona-Medina
Michael Gillhofer
Michael Widrich
Thomas Unterthiner
Johannes Brandstetter
Sepp Hochreiter
130
222
0
20 Jun 2018
Towards Robust Interpretability with Self-Explaining Neural Networks
David Alvarez-Melis
Tommi Jaakkola
MILM, XAI
140
948
0
20 Jun 2018
Contrastive Explanations with Local Foil Trees
J. V. D. Waa
M. Robeer
J. Diggelen
Matthieu J. S. Brinkhuis
Mark Antonius Neerincx
FAtt
79
82
0
19 Jun 2018
Maximally Invariant Data Perturbation as Explanation
Satoshi Hara
Kouichi Ikeno
Tasuku Soma
Takanori Maehara
AAML
72
8
0
19 Jun 2018
Detecting and interpreting myocardial infarction using fully convolutional neural networks
Nils Strodthoff
C. Strodthoff
100
154
0
18 Jun 2018
Hierarchical interpretations for neural network predictions
Chandan Singh
W. James Murdoch
Bin Yu
84
146
0
14 Jun 2018
A Note about: Local Explanation Methods for Deep Neural Networks lack Sensitivity to Parameter Values
Mukund Sundararajan
Ankur Taly
FAtt
46
21
0
11 Jun 2018
Building Bayesian Neural Networks with Blocks: On Structure, Interpretability and Uncertainty
Hao Zhou
Yunyang Xiong
Vikas Singh
UQCV, BDL
85
4
0
10 Jun 2018
Explainable Neural Networks based on Additive Index Models
J. Vaughan
Agus Sudjianto
Erind Brahimi
Jie Chen
V. Nair
79
106
0
05 Jun 2018
Explaining Explanations: An Overview of Interpretability of Machine Learning
Leilani H. Gilpin
David Bau
Ben Z. Yuan
Ayesha Bajwa
Michael A. Specter
Lalana Kagal
XAI
129
1,873
0
31 May 2018
How Important Is a Neuron?
Kedar Dhamdhere
Mukund Sundararajan
Qiqi Yan
FAtt, GNN
77
131
0
30 May 2018
Semantic Network Interpretation
Pei Guo
Ryan Farrell
MILM, FAtt
34
0
0
23 May 2018
Towards Explaining Anomalies: A Deep Taylor Decomposition of One-Class Models
Jacob R. Kauffmann
K. Müller
G. Montavon
DRL
77
98
0
16 May 2018
Did the Model Understand the Question?
Pramod Kaushik Mudrakarta
Ankur Taly
Mukund Sundararajan
Kedar Dhamdhere
ELM, OOD, FAtt
85
200
0
14 May 2018
Modeling Psychotherapy Dialogues with Kernelized Hashcode Representations: A Nonparametric Information-Theoretic Approach
S. Garg
Irina Rish
Guillermo Cecchi
Palash Goyal
Sarik Ghazarian
Shuyang Gao
Greg Ver Steeg
Aram Galstyan
70
0
0
26 Apr 2018
Pathologies of Neural Models Make Interpretations Difficult
Shi Feng
Eric Wallace
Alvin Grissom II
Mohit Iyyer
Pedro Rodriguez
Jordan L. Boyd-Graber
AAML, FAtt
108
322
0
20 Apr 2018
Understanding Regularization to Visualize Convolutional Neural Networks
Maximilian Baust
Florian Ludwig
Christian Rupprecht
Matthias Kohl
S. Braunewell
FAtt
54
4
0
20 Apr 2018
Egocentric 6-DoF Tracking of Small Handheld Objects
Rohit Pandey
Pavel Pidlypenskyi
Shuoran Yang
Christine Kaeser-Chen
25
4
0
16 Apr 2018
Scalable and Interpretable One-class SVMs with Deep Learning and Random Fourier features
Minh-Nghia Nguyen
Ngo Anh Vien
59
35
0
13 Apr 2018
Generative Visual Rationales
J. Seah
Jennifer S. N. Tang
Andy Kitchen
Jonathan Seah
MedIm
34
1
0
04 Apr 2018
Towards Explanation of DNN-based Prediction with Guided Feature Inversion
Mengnan Du
Ninghao Liu
Qingquan Song
Helen Zhou
FAtt
106
127
0
19 Mar 2018
Learning to Explain: An Information-Theoretic Perspective on Model Interpretation
Jianbo Chen
Le Song
Martin J. Wainwright
Michael I. Jordan
MLT, FAtt
186
576
0
21 Feb 2018
Finding Influential Training Samples for Gradient Boosted Decision Trees
B. Sharchilev
Yury Ustinovsky
P. Serdyukov
Maarten de Rijke
TDI
74
57
0
19 Feb 2018