Relative Attributing Propagation: Interpreting the Comparative Contributions of Individual Units in Deep Neural Networks
arXiv: 1904.00605, v4 (latest)
1 April 2019
Woo-Jeoung Nam, Shir Gur, Jaesik Choi, Lior Wolf, Seong-Whan Lee
Tags: FAtt
ArXiv (abs) · PDF · HTML · GitHub (76★)
Papers citing "Relative Attributing Propagation: Interpreting the Comparative Contributions of Individual Units in Deep Neural Networks" (26 of 26 papers shown)
PnPXAI: A Universal XAI Framework Providing Automatic Explanations Across Diverse Modalities and Models
Seongun Kim, Sol A Kim, Geonhyeong Kim, Enver Menadjiev, Chanwoo Lee, Seongwook Chung, Nari Kim, Jaesik Choi
15 May 2025
Probing Network Decisions: Capturing Uncertainties and Unveiling Vulnerabilities Without Label Information
Youngju Joung, Sehyun Lee, Jaesik Choi
Tags: AAML
12 Mar 2025
Towards Better Visualizing the Decision Basis of Networks via Unfold and Conquer Attribution Guidance
Jung-Ho Hong, Woo-Jeoung Nam, Kyu-Sung Jeon, Seong-Whan Lee
21 Dec 2023
Explaining the Decisions of Deep Policy Networks for Robotic Manipulations
Seongun Kim, Jaesik Choi
30 Oct 2023
Multiple Different Black Box Explanations for Image Classifiers
Hana Chockler, D. A. Kelly, Daniel Kroening
Tags: FAtt
25 Sep 2023
Disentangling Structure and Style: Political Bias Detection in News by Inducing Document Hierarchy
Jiwoo Hong, Yejin Cho, Jaemin Jung, Jiyoung Han, James Thorne
05 Apr 2023
Interpretable Diabetic Retinopathy Diagnosis based on Biomarker Activation Map
P. Zang, T. Hormel, Jie Wang, Yukun Guo, Steven T. Bailey, C. Flaxel, David Huang, T. Hwang, Yali Jia
Tags: MedIm
13 Dec 2022
Generalizability Analysis of Graph-based Trajectory Predictor with Vectorized Representation
Juanwu Lu, Wei Zhan, Masayoshi Tomizuka, Yeping Hu
06 Aug 2022
Debiasing Deep Chest X-Ray Classifiers using Intra- and Post-processing Methods
Ricards Marcinkevics, Ece Ozkan, Julia E. Vogt
26 Jul 2022
Optimizing Relevance Maps of Vision Transformers Improves Robustness
Hila Chefer, Idan Schwartz, Lior Wolf
Tags: ViT
02 Jun 2022
From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI
Meike Nauta, Jan Trienes, Shreyasi Pathak, Elisa Nguyen, Michelle Peters, Yasmin Schmitt, Jorg Schlotterer, M. V. Keulen, C. Seifert
Tags: ELM, XAI
20 Jan 2022
Saliency Grafting: Innocuous Attribution-Guided Mixup with Calibrated Label Mixing
Joonhyung Park, J. Yang, Jinwoo Shin, Sung Ju Hwang, Eunho Yang
16 Dec 2021
Learning a Weight Map for Weakly-Supervised Localization
Tal Shaharabany, Lior Wolf
Tags: WSOL, SSL
28 Nov 2021
Finding Representative Interpretations on Convolutional Neural Networks
P. C. Lam, Lingyang Chu, Maxim Torgonskiy, J. Pei, Yong Zhang, Lanjun Wang
Tags: FAtt, SSL, HAI
13 Aug 2021
Explanatory Pluralism in Explainable AI
Yiheng Yao
Tags: XAI
26 Jun 2021
MISA: Online Defense of Trojaned Models using Misattributions
Panagiota Kiourti, Wenchao Li, Anirban Roy, Karan Sikka, Susmit Jha
29 Mar 2021
Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers
Hila Chefer, Shir Gur, Lior Wolf
Tags: ViT
29 Mar 2021
Interpretable Deep Learning: Interpretation, Interpretability, Trustworthiness, and Beyond
Xuhong Li, Haoyi Xiong, Xingjian Li, Xuanyu Wu, Xiao Zhang, Ji Liu, Jiang Bian, Dejing Dou
Tags: AAML, FaML, XAI, HAI
19 Mar 2021
Explanations for Occluded Images
Hana Chockler, Daniel Kroening, Youcheng Sun
05 Mar 2021
Integrated Grad-CAM: Sensitivity-Aware Visual Explanation of Deep Convolutional Networks via Integrated Gradient-Based Scoring
S. Sattarzadeh, M. Sudhakar, Konstantinos N. Plataniotis, Jongseong Jang, Yeonjeong Jeong, Hyunwoo J. Kim
Tags: FAtt
15 Feb 2021
Transformer Interpretability Beyond Attention Visualization
Hila Chefer, Shir Gur, Lior Wolf
17 Dec 2020
Visualization of Supervised and Self-Supervised Neural Networks via Attribution Guided Factorization
Shir Gur, Ameen Ali, Lior Wolf
Tags: FAtt
03 Dec 2020
Attribution Preservation in Network Compression for Reliable Network Interpretation
Geondo Park, J. Yang, Sung Ju Hwang, Eunho Yang
28 Oct 2020
Counterfactual Explanation Based on Gradual Construction for Deep Networks
Hong G Jung, Sin-Han Kang, Hee-Dong Kim, Dong-Ok Won, Seong-Whan Lee
Tags: OOD, FAtt
05 Aug 2020
Interpretation of Deep Temporal Representations by Selective Visualization of Internally Activated Nodes
Sohee Cho, Ginkyeng Lee, Wonjoon Chang, Jaesik Choi
27 Apr 2020
When Explanations Lie: Why Many Modified BP Attributions Fail
Leon Sixt, Maximilian Granz, Tim Landgraf
Tags: BDL, FAtt, XAI
20 Dec 2019