Axiomatic Attribution for Deep Networks
Mukund Sundararajan, Ankur Taly, Qiqi Yan
4 March 2017 · arXiv:1703.01365 · OOD, FAtt

Papers citing "Axiomatic Attribution for Deep Networks"

Showing 50 of 2,871 citing papers.
Counterfactual Explanation Based on Gradual Construction for Deep Networks (05 Aug 2020)
  Hong G Jung, Sin-Han Kang, Hee-Dong Kim, Dong-Ok Won, Seong-Whan Lee · OOD, FAtt
Explainable Predictive Process Monitoring (04 Aug 2020)
  Musabir Musabayli, F. Maggi, Williams Rizzi, Josep Carmona, Chiara Di Francescomarino
A Causal Lens for Peeking into Black Box Predictive Models: Predictive Model Interpretation via Causal Attribution (01 Aug 2020)
  A. Khademi, Vasant Honavar · CML
On the Generalizability of Neural Program Models with respect to Semantic-Preserving Program Transformations (31 Jul 2020)
  Md Rafiqul Islam Rabin, Nghi D. Q. Bui, Ke Wang, Yijun Yu, Lingxiao Jiang, Mohammad Amin Alipour
The role of explainability in creating trustworthy artificial intelligence for health care: a comprehensive survey of the terminology, design choices, and evaluation strategies (31 Jul 2020)
  A. Markus, J. Kors, P. Rijnbeek
Supervised Machine Learning Techniques: An Overview with Applications to Banking (28 Jul 2020)
  Linwei Hu, Jie Chen, J. Vaughan, Hanyu Yang, Kelly Wang, Agus Sudjianto, V. Nair
Reachable Sets of Classifiers and Regression Models: (Non-)Robustness Analysis and Robust Training (28 Jul 2020)
  Anna-Kathrin Kopetzki, Stephan Günnemann
The MAMe Dataset: On the relevance of High Resolution and Variable Shape image properties (27 Jul 2020)
  Ferran Parés, Anna Arias-Duart, Dario Garcia-Gasulla, Gema Campo-Francés, Nina Viladrich, Eduard Ayguadé, Jesús Labarta
Are Visual Explanations Useful? A Case Study in Model-in-the-Loop Prediction (23 Jul 2020)
  Eric Chu, D. Roy, Jacob Andreas · FAtt, LRM
Rethinking CNN Models for Audio Classification (22 Jul 2020)
  Kamalesh Palanisamy, Dipika Singhania, Angela Yao · SSL
Pattern-Guided Integrated Gradients (21 Jul 2020)
  Robert Schwarzenberg, Steffen Castle
Melody: Generating and Visualizing Machine Learning Model Summary to Understand Data and Classifiers Together (21 Jul 2020)
  G. Chan, E. Bertini, L. G. Nonato, Brian Barr, Claudio T. Silva
Fairwashing Explanations with Off-Manifold Detergent (20 Jul 2020)
  Christopher J. Anders, Plamen Pasliev, Ann-Kathrin Dombrowski, K. Müller, Pan Kessel · FAtt, FaML
Temporal Pointwise Convolutional Networks for Length of Stay Prediction in the Intensive Care Unit (18 Jul 2020)
  Emma Rocheteau, Pietro Lio, Stephanie L. Hyland · OOD
Explanation-Guided Training for Cross-Domain Few-Shot Classification (17 Jul 2020)
  Jiamei Sun, Sebastian Lapuschkin, Wojciech Samek, Yunqing Zhao, Ngai-Man Cheung, Alexander Binder
Understanding and Diagnosing Vulnerability under Adversarial Attacks (17 Jul 2020)
  Haizhong Zheng, Ziqi Zhang, Honglak Lee, A. Prakash · FAtt, AAML
Modern Hopfield Networks and Attention for Immune Repertoire Classification (16 Jul 2020)
  Michael Widrich, Bernhard Schafl, Hubert Ramsauer, Milena Pavlović, Lukas Gruber, ..., Johannes Brandstetter, G. K. Sandve, Victor Greiff, Sepp Hochreiter, Günter Klambauer
Deep Learning in Protein Structural Modeling and Design (16 Jul 2020)
  Wenhao Gao, S. Mahajan, Jeremias Sulam, Jeffrey J. Gray
Learning Invariances for Interpretability using Supervised VAE (15 Jul 2020)
  An-phi Nguyen, María Rodríguez Martínez · DRL
On quantitative aspects of model interpretability (15 Jul 2020)
  An-phi Nguyen, María Rodríguez Martínez
Concept Learners for Few-Shot Learning (14 Jul 2020)
  Kaidi Cao, Maria Brbic, J. Leskovec · VLM, OffRL
Towards causal benchmarking of bias in face analysis algorithms (13 Jul 2020)
  Guha Balakrishnan, Yuanjun Xiong, Wei Xia, Pietro Perona · CVBM
A simple defense against adversarial attacks on heatmap explanations (13 Jul 2020)
  Laura Rieger, Lars Kai Hansen · FAtt, AAML
Monitoring and explainability of models in production (13 Jul 2020)
  Janis Klaise, A. V. Looveren, Clive Cox, G. Vacanti, Alexandru Coca
Exclusion and Inclusion -- A model agnostic approach to feature importance in DNNs (13 Jul 2020)
  S. Maji, Arijit Ghosh Chowdhury, Raghav Bali, Vamsi M Bhandaru
Interpretable, Multidimensional, Multimodal Anomaly Detection with Negative Sampling for Detection of Device Failure (12 Jul 2020)
  John Sipple
Usefulness of interpretability methods to explain deep learning based plant stress phenotyping (11 Jul 2020)
  Koushik Nagasubramanian, Asheesh K. Singh, Arti Singh, Soumik Sarkar, Baskar Ganapathysubramanian · FAtt
Fast Real-time Counterfactual Explanations (11 Jul 2020)
  Yunxia Zhao
Scientific Discovery by Generating Counterfactuals using Image Translation (10 Jul 2020)
  Arunachalam Narayanaswamy, Subhashini Venugopalan, D. Webster, L. Peng, G. Corrado, ..., Abigail E. Huang, Siva Balasubramanian, Michael P. Brenner, Phil Q. Nelson, A. Varadarajan · DiffM, MedIm
Concept Bottleneck Models (09 Jul 2020)
  Pang Wei Koh, Thao Nguyen, Y. S. Tang, Stephen Mussmann, Emma Pierson, Been Kim, Percy Liang
PointMask: Towards Interpretable and Bias-Resilient Point Cloud Processing (09 Jul 2020)
  Saeid Asgari Taghanaki, Kaveh Hassani, P. Jayaraman, Amir Hosein Khas Ahmadi, Tonya Custis · 3DPC
Evaluation for Weakly Supervised Object Localization: Protocol, Metrics, and Datasets (08 Jul 2020)
  Junsuk Choe, Seong Joon Oh, Sanghyuk Chun, Seungho Lee, Zeynep Akata, Hyunjung Shim · WSOL
An exploration of the influence of path choice in game-theoretic attribution algorithms (08 Jul 2020)
  Geoff Ward, S. Kamkar, Jay Budzik · TDI, FAtt
Human Trajectory Forecasting in Crowds: A Deep Learning Perspective (07 Jul 2020)
  Parth Kothari, S. Kreiss, Alexandre Alahi · HAI, AI4TS
ProtoryNet - Interpretable Text Classification Via Prototype Trajectories (03 Jul 2020)
  Dat Hong, Tong Wang, Stephen S. Baek · AI4TS
Explainable Deep One-Class Classification (03 Jul 2020)
  Philipp Liznerski, Lukas Ruff, Robert A. Vandermeulen, Billy Joe Franks, Marius Kloft, Klaus-Robert Muller
Drug discovery with explainable artificial intelligence (01 Jul 2020)
  José Jiménez-Luna, F. Grisoni, G. Schneider
Scaling Symbolic Methods using Gradients for Neural Model Explanation (29 Jun 2020)
  Subham S. Sahoo, Subhashini Venugopalan, Li Li, Rishabh Singh, Patrick F. Riley · FAtt
Interpretable and Trustworthy Deepfake Detection via Dynamic Prototypes (28 Jun 2020)
  Loc Trinh, Michael Tsang, Sirisha Rambhatla, Yan Liu
Causality Learning: A New Perspective for Interpretable Machine Learning (27 Jun 2020)
  Guandong Xu, Tri Dung Duong, Q. Li, S. Liu, Xianzhi Wang · XAI, OOD, CML
BERTology Meets Biology: Interpreting Attention in Protein Language Models (26 Jun 2020)
  Jesse Vig, Ali Madani, Lav Varshney, Caiming Xiong, R. Socher, Nazneen Rajani
Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance (26 Jun 2020)
  Gagan Bansal, Tongshuang Wu, Joyce Zhou, Raymond Fok, Besmira Nushi, Ece Kamar, Marco Tulio Ribeiro, Daniel S. Weld
Proper Network Interpretability Helps Adversarial Robustness in Classification (26 Jun 2020)
  Akhilan Boopathy, Sijia Liu, Gaoyuan Zhang, Cynthia Liu, Pin-Yu Chen, Shiyu Chang, Luca Daniel · AAML, FAtt
Generative causal explanations of black-box classifiers (24 Jun 2020)
  Matthew R. O’Shaughnessy, Gregory H. Canal, Marissa Connor, Mark A. Davenport, Christopher Rozell · CML
Gaining Insight into SARS-CoV-2 Infection and COVID-19 Severity Using Self-supervised Edge Features and Graph Neural Networks (23 Jun 2020)
  Arijit Sehanobish, N. Ravindra, David van Dijk · SSL
Feature Interaction Interpretability: A Case for Explaining Ad-Recommendation Systems via Neural Interaction Detection (19 Jun 2020)
  Michael Tsang, Dehua Cheng, Hanpeng Liu, Xuening Feng, Eric Zhou, Yan Liu · FAtt
How does this interaction affect me? Interpretable attribution for feature interactions (19 Jun 2020)
  Michael Tsang, Sirisha Rambhatla, Yan Liu · FAtt
Modeling Subjective Assessments of Guilt in Newspaper Crime Narratives (17 Jun 2020)
  Elisa Kreiss, Zijian Wang, Christopher Potts
Model Explanations with Differential Privacy (16 Jun 2020)
  Neel Patel, Reza Shokri, Yair Zick · SILM, FedML
High Dimensional Model Explanations: an Axiomatic Approach (16 Jun 2020)
  Neel Patel, Martin Strobel, Yair Zick · FAtt