ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Axiomatic Attribution for Deep Networks

4 March 2017
Mukund Sundararajan
Ankur Taly
Qiqi Yan
    OOD, FAtt
arXiv: 1703.01365 (abs · PDF · HTML)
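For context, the attribution method this paper introduces, integrated gradients, accumulates the model's gradients along a straight-line path from a baseline input to the actual input. A minimal sketch follows; the central finite-difference gradient and the linear test function are illustrative stand-ins for a real autodiff framework, not part of the paper's implementation.

```python
import numpy as np

def integrated_gradients(f, x, baseline, steps=50):
    """Approximate integrated gradients of a scalar function f at x.

    Attribution for feature i is (x_i - baseline_i) times the average
    gradient of f along the straight-line path from baseline to x,
    estimated here with a midpoint Riemann sum.
    """
    x = np.asarray(x, dtype=float)
    baseline = np.asarray(baseline, dtype=float)

    def grad(z, eps=1e-5):
        # Central finite differences, one coordinate at a time
        # (a stand-in for backprop, for illustration only).
        g = np.zeros_like(z)
        for i in range(z.size):
            e = np.zeros_like(z)
            e[i] = eps
            g[i] = (f(z + e) - f(z - e)) / (2 * eps)
        return g

    # Midpoints of the interpolation path from baseline to x.
    alphas = (np.arange(steps) + 0.5) / steps
    total = np.zeros_like(x)
    for a in alphas:
        total += grad(baseline + a * (x - baseline))
    return (x - baseline) * total / steps
```

For a linear function f(z) = 3·z₀ + 2·z₁ with a zero baseline, the attributions work out to exactly [3·x₀, 2·x₁], and their sum equals f(x) − f(baseline), illustrating the paper's completeness axiom.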

Papers citing "Axiomatic Attribution for Deep Networks"

50 / 2,873 papers shown
Interpreting BERT-based Text Similarity via Activation and Saliency Maps
Itzik Malkiel
Dvir Ginzburg
Oren Barkan
Avi Caciularu
Jonathan Weill
Noam Koenigstein
81
21
0
13 Aug 2022
The Weighting Game: Evaluating Quality of Explainability Methods
Lassi Raatikainen
Esa Rahtu
FAtt, XAI
36
5
0
12 Aug 2022
Comparing Baseline Shapley and Integrated Gradients for Local Explanation: Some Additional Insights
Tianshu Feng
Zhipu Zhou
Tarun Joshi
V. Nair
FAtt
52
5
0
12 Aug 2022
A Multimodal Transformer: Fusing Clinical Notes with Structured EHR Data for Interpretable In-Hospital Mortality Prediction
Weimin Lyu
Xinyu Dong
Rachel Wong
Songzhu Zheng
Kayley Abell-Hart
Fusheng Wang
Chao Chen
115
52
0
09 Aug 2022
Learning to Learn to Predict Performance Regressions in Production at Meta
M. Beller
Hongyu Li
V. Nair
V. Murali
Imad Ahmad
Jürgen Cito
Drew Carlson
Gareth Ari Aye
Wes Dyer
82
5
0
08 Aug 2022
Abutting Grating Illusion: Cognitive Challenge to Neural Network Models
Jinyu Fan
Yi Zeng
AAML
65
1
0
08 Aug 2022
Are Gradients on Graph Structure Reliable in Gray-box Attacks?
Zihan Liu
Yun Luo
Lirong Wu
Siyuan Li
Zicheng Liu
Stan Z. Li
AAML
107
23
0
07 Aug 2022
Shap-CAM: Visual Explanations for Convolutional Neural Networks based on Shapley Value
Quan Zheng
Ziwei Wang
Jie Zhou
Jiwen Lu
FAtt
75
33
0
07 Aug 2022
Generalizability Analysis of Graph-based Trajectory Predictor with Vectorized Representation
Juanwu Lu
Wei Zhan
Masayoshi Tomizuka
Yeping Hu
74
6
0
06 Aug 2022
Parameter Averaging for Feature Ranking
Talip Uçar
Ehsan Hajiramezanali
45
0
0
05 Aug 2022
Differentially Private Counterfactuals via Functional Mechanism
Fan Yang
Qizhang Feng
Kaixiong Zhou
Jiahao Chen
Helen Zhou
81
9
0
04 Aug 2022
ferret: a Framework for Benchmarking Explainers on Transformers
Giuseppe Attanasio
Eliana Pastor
C. Bonaventura
Debora Nozza
90
31
0
02 Aug 2022
s-LIME: Reconciling Locality and Fidelity in Linear Explanations
Romaric Gaudel
Luis Galárraga
J. Delaunay
L. Rozé
Vaishnavi Bhargava
FAtt
60
16
0
02 Aug 2022
Visual Interpretable and Explainable Deep Learning Models for Brain Tumor MRI and COVID-19 Chest X-ray Images
Yusuf Brima
M. Atemkeng
FAtt, MedIm
83
1
0
01 Aug 2022
Leveraging Explanations in Interactive Machine Learning: An Overview
Stefano Teso
Öznur Alkan
Wolfgang Stammer
Elizabeth M. Daly
XAI, FAtt, LRM
164
63
0
29 Jul 2022
Claim-Dissector: An Interpretable Fact-Checking System with Joint Re-ranking and Veracity Prediction
Martin Fajcik
P. Motlícek
Pavel Smrz
110
21
0
28 Jul 2022
Unit Testing for Concepts in Neural Networks
Charles Lovering
Ellie Pavlick
75
28
0
28 Jul 2022
An Interpretability Evaluation Benchmark for Pre-trained Language Models
Ya-Ming Shen
Lijie Wang
Ying-Cong Chen
Xinyan Xiao
Jing Liu
Hua Wu
81
4
0
28 Jul 2022
ReFRS: Resource-efficient Federated Recommender System for Dynamic and Diversified User Preferences
Mubashir Imran
Hongzhi Yin
Tong Chen
Nguyen Quoc Viet Hung
Alexander Zhou
Kai Zheng
85
72
0
28 Jul 2022
Toward Transparent AI: A Survey on Interpreting the Inner Structures of Deep Neural Networks
Tilman Raukur
A. Ho
Stephen Casper
Dylan Hadfield-Menell
AAML, AI4CE
135
134
0
27 Jul 2022
An Explainable Decision Support System for Predictive Process Analytics
Riccardo Galanti
M. Leoni
M. Monaro
Nicoló Navarin
Alan Marazzi
Brigida Di Stasi
Stéphanie Maldera
93
28
0
26 Jul 2022
Inter-model Interpretability: Self-supervised Models as a Case Study
Ahmad Mustapha
Wael Khreich
Wassim Masri
SSL
24
0
0
24 Jul 2022
A general-purpose method for applying Explainable AI for Anomaly Detection
John Sipple
Abdou Youssef
87
17
0
23 Jul 2022
Deep neural network heatmaps capture Alzheimer's disease patterns reported in a large meta-analysis of neuroimaging studies
Dingqian Wang
N. Honnorat
P. Fox
K. Ritter
Simon B. Eickhoff
S. Seshadri
Mohamad Habes
66
37
0
22 Jul 2022
TRUST-LAPSE: An Explainable and Actionable Mistrust Scoring Framework for Model Monitoring
Nandita Bhaskhar
D. Rubin
Christopher Lee-Messer
46
5
0
22 Jul 2022
Privacy and Transparency in Graph Machine Learning: A Unified Perspective
Megha Khosla
70
4
0
22 Jul 2022
Explainable AI Algorithms for Vibration Data-based Fault Detection: Use Case-adapted Methods and Critical Evaluation
Oliver Mey
Deniz Neufeld
62
24
0
21 Jul 2022
Lazy Estimation of Variable Importance for Large Neural Networks
Yue Gao
Abby Stevens
Rebecca Willett
Garvesh Raskutti
119
4
0
19 Jul 2022
XG-BoT: An Explainable Deep Graph Neural Network for Botnet Detection and Forensics
Wai Weng Lo
Gayan K. Kulatilleke
Mohanad Sarhan
S. Layeghy
Marius Portmann
86
46
0
19 Jul 2022
Task-aware Similarity Learning for Event-triggered Time Series
Shaoyu Dou
Kai Yang
Yang Jiao
Chengbo Qiu
Kui Ren
AI4TS
54
0
0
17 Jul 2022
Towards Explainability in NLP: Analyzing and Calculating Word Saliency through Word Properties
Jialiang Dong
Zhitao Guan
Longfei Wu
Zijian Zhang
Xiaojiang Du
XAI, AAML, FAtt, MILM
95
2
0
17 Jul 2022
MDM: Multiple Dynamic Masks for Visual Explanation of Neural Networks
Yitao Peng
Longzhen Yang
Yihang Liu
Lianghua He
39
0
0
17 Jul 2022
Algorithms to estimate Shapley value feature attributions
Hugh Chen
Ian Covert
Scott M. Lundberg
Su-In Lee
TDI, FAtt
103
240
0
15 Jul 2022
Beware the Rationalization Trap! When Language Model Explainability Diverges from our Mental Models of Language
Rita Sevastjanova
Mennatallah El-Assady
LRM
94
10
0
14 Jul 2022
Explainable Intrusion Detection Systems (X-IDS): A Survey of Current Methods, Challenges, and Opportunities
Subash Neupane
Jesse Ables
William Anderson
Sudip Mittal
Shahram Rahimi
I. Banicescu
Maria Seale
AAML
115
76
0
13 Jul 2022
Verifying Attention Robustness of Deep Neural Networks against Semantic Perturbations
S. Munakata
Caterina Urban
Haruki Yokoyama
Koji Yamamoto
Kazuki Munakata
AAML
48
4
0
13 Jul 2022
BASED-XAI: Breaking Ablation Studies Down for Explainable Artificial Intelligence
Isha Hameed
Samuel Sharpe
Daniel Barcklow
Justin Au-yeung
Sahil Verma
Jocelyn Huang
Brian Barr
C. Bayan Bruss
83
15
0
12 Jul 2022
From Correlation to Causation: Formalizing Interpretable Machine Learning as a Statistical Process
Lukas Klein
Mennatallah El-Assady
Paul F. Jäger
CML
50
1
0
11 Jul 2022
A multi-level interpretable sleep stage scoring system by infusing experts' knowledge into a deep network architecture
H. Niknazar
S. Mednick
56
5
0
11 Jul 2022
TalkToModel: Explaining Machine Learning Models with Interactive Natural Language Conversations
Dylan Slack
Satyapriya Krishna
Himabindu Lakkaraju
Sameer Singh
86
84
0
08 Jul 2022
SInGE: Sparsity via Integrated Gradients Estimation of Neuron Relevance
Edouard Yvinec
Arnaud Dapogny
Matthieu Cord
Kévin Bailly
88
9
0
08 Jul 2022
The Harvard USPTO Patent Dataset: A Large-Scale, Well-Structured, and Multi-Purpose Corpus of Patent Applications
Mirac Suzgun
Luke Melas-Kyriazi
Suproteem K. Sarkar
S. Kominers
Stuart M. Shieber
119
29
0
08 Jul 2022
Calibrate to Interpret
Gregory Scafarto
N. Posocco
Antoine Bonnefoy
FaML
20
4
0
07 Jul 2022
An Additive Instance-Wise Approach to Multi-class Model Interpretation
Vy Vo
Van Nguyen
Trung Le
Quan Hung Tran
Gholamreza Haffari
S. Çamtepe
Dinh Q. Phung
FAtt
134
5
0
07 Jul 2022
Is a PET all you need? A multi-modal study for Alzheimer's disease using 3D CNNs
Marla Narazani
Ignacio Sarasua
Sebastian Polsterl
A. Lizarraga
Igor Yakushev
Christian Wachinger
MedIm
51
17
0
05 Jul 2022
SESS: Saliency Enhancing with Scaling and Sliding
Osman Tursun
Akila Pemasiri
Sridha Sridharan
Clinton Fookes
23
5
0
05 Jul 2022
Fidelity of Ensemble Aggregation for Saliency Map Explanations using Bayesian Optimization Techniques
Yannik Mahlau
Christian Nolde
FAtt
113
0
0
04 Jul 2022
Distilling Ensemble of Explanations for Weakly-Supervised Pre-Training of Image Segmentation Models
Xuhong Li
Haoyi Xiong
Yi Liu
Dingfu Zhou
Zeyu Chen
Yaqing Wang
Dejing Dou
55
8
0
04 Jul 2022
Interpretable by Design: Learning Predictors by Composing Interpretable Queries
Aditya Chattopadhyay
Stewart Slocum
B. Haeffele
René Vidal
D. Geman
113
24
0
03 Jul 2022
FRAME: Evaluating Rationale-Label Consistency Metrics for Free-Text Rationales
Aaron Chan
Shaoliang Nie
Liang Tan
Xiaochang Peng
Hamed Firooz
Maziar Sanjabi
Xiang Ren
125
10
0
02 Jul 2022