arXiv:1905.00780
Full-Gradient Representation for Neural Network Visualization
2 May 2019
Suraj Srinivas
F. Fleuret
MILM
FAtt
Papers citing "Full-Gradient Representation for Neural Network Visualization" (50 of 62 shown)
Beyond Patches: Mining Interpretable Part-Prototypes for Explainable AI
Mahdi Alehdaghi
Rajarshi Bhattacharya
Pourya Shamsolmoali
Rafael M. O. Cruz
Maguelonne Heritier
Eric Granger
38
0
0
16 Apr 2025
Flip Learning: Weakly Supervised Erase to Segment Nodules in Breast Ultrasound
Yuhao Huang
Ao Chang
Haoran Dou
X. Tao
Xinrui Zhou
...
Ruobing Huang
Alejandro F Frangi
Lingyun Bao
Xin Yang
Dong Ni
87
1
0
26 Mar 2025
Show and Tell: Visually Explainable Deep Neural Nets via Spatially-Aware Concept Bottleneck Models
Itay Benou
Tammy Riklin-Raviv
67
0
0
27 Feb 2025
B-cosification: Transforming Deep Neural Networks to be Inherently Interpretable
Shreyash Arya
Sukrut Rao
Moritz Böhle
Bernt Schiele
68
2
0
28 Jan 2025
GRAPHITE: Graph-Based Interpretable Tissue Examination for Enhanced Explainability in Breast Cancer Histopathology
Raktim Kumar Mondol
Ewan K. A. Millar
Peter H. Graham
Lois Browne
Arcot Sowmya
Erik H. W. Meijering
46
0
0
08 Jan 2025
Unlearning-based Neural Interpretations
Ching Lam Choi
Alexandre Duplessis
Serge Belongie
FAtt
47
0
0
10 Oct 2024
Designing Concise ConvNets with Columnar Stages
Ashish Kumar
Jaesik Park
MQ
29
0
0
05 Oct 2024
Feature Extractor or Decision Maker: Rethinking the Role of Visual Encoders in Visuomotor Policies
Ruiyu Wang
Zheyu Zhuang
Shutong Jin
Nils Ingelhag
Danica Kragic
Florian T. Pokorny
31
0
0
30 Sep 2024
Comprehensive Attribution: Inherently Explainable Vision Model with Feature Detector
Xianren Zhang
Dongwon Lee
Suhang Wang
VLM
FAtt
48
3
0
27 Jul 2024
I2AM: Interpreting Image-to-Image Latent Diffusion Models via Bi-Attribution Maps
Junseo Park
Hyeryung Jang
78
0
0
17 Jul 2024
Benchmarking the Attribution Quality of Vision Models
Robin Hesse
Simone Schaub-Meyer
Stefan Roth
FAtt
34
3
0
16 Jul 2024
Characterizing Disparity Between Edge Models and High-Accuracy Base Models for Vision Tasks
Zhenyu Wang
S. Nirjon
32
0
0
13 Jul 2024
Respect the model: Fine-grained and Robust Explanation with Sharing Ratio Decomposition
Sangyu Han
Yearim Kim
Nojun Kwak
AAML
29
1
0
25 Jan 2024
CAManim: Animating end-to-end network activation maps
Emily Kaczmarek
Olivier X. Miguel
Alexa C. Bowie
R. Ducharme
Alysha L. J. Dingwall-Harvey
S. Hawken
Christine M. Armour
Mark C. Walker
Kevin Dick
HAI
26
1
0
19 Dec 2023
Rethinking Class Activation Maps for Segmentation: Revealing Semantic Information in Shallow Layers by Reducing Noise
Hangcheng Dong
Yuhao Jiang
Yingyan Huang
Jing-Xiao Liao
Bingguo Liu
Dong Ye
Guodong Liu
18
1
0
04 Aug 2023
Discriminative Feature Attributions: Bridging Post Hoc Explainability and Inherent Interpretability
Usha Bhalla
Suraj Srinivas
Himabindu Lakkaraju
FAtt
CML
29
6
0
27 Jul 2023
Active Globally Explainable Learning for Medical Images via Class Association Embedding and Cyclic Adversarial Generation
Ruitao Xie
Jingbang Chen
Limai Jiang
Ru Xiao
Yi-Lun Pan
Yunpeng Cai
GAN
MedIm
24
0
0
12 Jun 2023
PAMI: partition input and aggregate outputs for model interpretation
Wei Shi
Wentao Zhang
Weishi Zheng
Ruixuan Wang
FAtt
26
3
0
07 Feb 2023
Holistically Explainable Vision Transformers
Moritz D Boehle
Mario Fritz
Bernt Schiele
ViT
35
9
0
20 Jan 2023
Negative Flux Aggregation to Estimate Feature Attributions
X. Li
Deng Pan
Chengyin Li
Yao Qiang
D. Zhu
FAtt
8
6
0
17 Jan 2023
Hierarchical Dynamic Masks for Visual Explanation of Neural Networks
Yitao Peng
Longzhen Yang
Yihang Liu
Lianghua He
FAtt
11
3
0
12 Jan 2023
Explainability and Robustness of Deep Visual Classification Models
Jindong Gu
AAML
39
2
0
03 Jan 2023
Impossibility Theorems for Feature Attribution
Blair Bilodeau
Natasha Jaques
Pang Wei Koh
Been Kim
FAtt
20
68
0
22 Dec 2022
Attribution-based XAI Methods in Computer Vision: A Review
Kumar Abhishek
Deeksha Kamath
27
18
0
27 Nov 2022
Diffusion Visual Counterfactual Explanations
Maximilian Augustin
Valentyn Boreiko
Francesco Croce
Matthias Hein
DiffM
BDL
32
68
0
21 Oct 2022
MaskTune: Mitigating Spurious Correlations by Forcing to Explore
Saeid Asgari Taghanaki
Aliasghar Khani
Fereshte Khani
A. Gholami
Linh-Tam Tran
Ali Mahdavi-Amiri
Ghassan Hamarneh
AAML
41
45
0
30 Sep 2022
Sequential Attention for Feature Selection
T. Yasuda
M. Bateni
Lin Chen
Matthew Fahrbach
Gang Fu
Vahab Mirrokni
39
11
0
29 Sep 2022
A model-agnostic approach for generating Saliency Maps to explain inferred decisions of Deep Learning Models
S. Karatsiolis
A. Kamilaris
FAtt
29
1
0
19 Sep 2022
Saliency Guided Adversarial Training for Learning Generalizable Features with Applications to Medical Imaging Classification System
Xin Li
Yao Qiang
Chengyin Li
Sijia Liu
D. Zhu
OOD
MedIm
31
4
0
09 Sep 2022
Debiasing Deep Chest X-Ray Classifiers using Intra- and Post-processing Methods
Ričards Marcinkevičs
Ece Ozkan
Julia E. Vogt
25
18
0
26 Jul 2022
Backdoor Attacks on Vision Transformers
Akshayvarun Subramanya
Aniruddha Saha
Soroush Abbasi Koohpayegani
Ajinkya Tejankar
Hamed Pirsiavash
ViT
AAML
18
16
0
16 Jun 2022
Dual Decomposition of Convex Optimization Layers for Consistent Attention in Medical Images
Tom Ron
M. Weiler-Sagie
Tamir Hazan
FAtt
MedIm
24
6
0
06 Jun 2022
Optimizing Relevance Maps of Vision Transformers Improves Robustness
Hila Chefer
Idan Schwartz
Lior Wolf
ViT
32
37
0
02 Jun 2022
Learnable Visual Words for Interpretable Image Recognition
Wenxi Xiao
Zhengming Ding
Hongfu Liu
VLM
25
2
0
22 May 2022
B-cos Networks: Alignment is All We Need for Interpretability
Moritz D Boehle
Mario Fritz
Bernt Schiele
42
85
0
20 May 2022
Sparse Visual Counterfactual Explanations in Image Space
Valentyn Boreiko
Maximilian Augustin
Francesco Croce
Philipp Berens
Matthias Hein
BDL
CML
30
26
0
16 May 2022
Self-Supervised Learning for Invariant Representations from Multi-Spectral and SAR Images
P. Jain
Bianca Schoen-Phelan
R. Ross
27
32
0
04 May 2022
XAI for Transformers: Better Explanations through Conservative Propagation
Ameen Ali
Thomas Schnake
Oliver Eberle
G. Montavon
Klaus-Robert Müller
Lior Wolf
FAtt
15
89
0
15 Feb 2022
Deeply Explain CNN via Hierarchical Decomposition
Ming-Ming Cheng
Peng-Tao Jiang
Linghao Han
Liang Wang
Philip H. S. Torr
FAtt
53
15
0
23 Jan 2022
Explaining neural network predictions of material strength
Ian Palmer
T. Nathan Mundhenk
B. J. Gallagher
Yong Han
21
2
0
05 Nov 2021
Interpreting Deep Learning Models in Natural Language Processing: A Review
Xiaofei Sun
Diyi Yang
Xiaoya Li
Tianwei Zhang
Yuxian Meng
Han Qiu
Guoyin Wang
Eduard H. Hovy
Jiwei Li
17
44
0
20 Oct 2021
TSGB: Target-Selective Gradient Backprop for Probing CNN Visual Saliency
Lin Cheng
Pengfei Fang
Yanjie Liang
Liao Zhang
Chunhua Shen
Hanzi Wang
FAtt
22
11
0
11 Oct 2021
Consistent Explanations by Contrastive Learning
Vipin Pillai
Soroush Abbasi Koohpayegani
Ashley Ouligian
Dennis Fong
Hamed Pirsiavash
FAtt
20
21
0
01 Oct 2021
Saliency Guided Experience Packing for Replay in Continual Learning
Gobinda Saha
Kaushik Roy
VLM
KELM
CLL
94
21
0
10 Sep 2021
PACE: Posthoc Architecture-Agnostic Concept Extractor for Explaining CNNs
V. Kamakshi
Uday Gupta
N. C. Krishnan
16
18
0
31 Aug 2021
Evaluation of Saliency-based Explainability Method
Sam Zabdiel Sunder Samuel
V. Kamakshi
Namrata Lodhi
N. C. Krishnan
FAtt
XAI
32
12
0
24 Jun 2021
IA-RED²: Interpretability-Aware Redundancy Reduction for Vision Transformers
Bowen Pan
Rameswar Panda
Yi Ding
Zhangyang Wang
Rogerio Feris
A. Oliva
VLM
ViT
39
153
0
23 Jun 2021
Synthetic Benchmarks for Scientific Research in Explainable Machine Learning
Yang Liu
Sujay Khandagale
Colin White
W. Neiswanger
37
65
0
23 Jun 2021
Vision Transformer using Low-level Chest X-ray Feature Corpus for COVID-19 Diagnosis and Severity Quantification
Sangjoon Park
Gwanghyun Kim
Y. Oh
J. Seo
Sang Min Lee
Jin Hwan Kim
Sungjun Moon
Jae-Kwang Lim
Jong Chul Ye
ViT
MedIm
48
97
0
15 Apr 2021
Shapley Explanation Networks
Rui Wang
Xiaoqian Wang
David I. Inouye
TDI
FAtt
19
44
0
06 Apr 2021