arXiv: 1910.12370
Input-Cell Attention Reduces Vanishing Saliency of Recurrent Neural Networks
27 October 2019
Aya Abdelsalam Ismail, Mohamed K. Gunady, L. Pessoa, H. C. Bravo, S. Feizi
Topics: AI4TS
Papers citing "Input-Cell Attention Reduces Vanishing Saliency of Recurrent Neural Networks" (7 papers):
1. Explainable AI needs formal notions of explanation correctness. Stefan Haufe, Rick Wilming, Benedict Clark, Rustam Zhumagambetov, Danny Panknin, Ahcène Boubekki. [XAI] 22 Sep 2024
2. SCAAT: Improving Neural Network Interpretability via Saliency Constrained Adaptive Adversarial Training. Rui Xu, Wenkang Qin, Peixiang Huang, Hao Wang, Lin Luo. [FAtt, AAML] 09 Nov 2023
3. Data-Centric Debugging: mitigating model failures via targeted data collection. Sahil Singla, Atoosa Malemir Chegini, Mazda Moayeri, Soheil Feizi. 17 Nov 2022
4. BolT: Fused Window Transformers for fMRI Time Series Analysis. H. Bedel, Irmak Sivgin, Onat Dalmaz, S. Dar, Tolga Çukur. 23 May 2022
5. Improving Deep Learning Interpretability by Saliency Guided Training. Aya Abdelsalam Ismail, H. C. Bravo, S. Feizi. [FAtt] 29 Nov 2021
6. TimeSHAP: Explaining Recurrent Models through Sequence Perturbations. João Bento, Pedro Saleiro, André F. Cruz, Mário A. T. Figueiredo, P. Bizarro. [FAtt, AI4TS] 30 Nov 2020
7. Methods for Interpreting and Understanding Deep Neural Networks. G. Montavon, Wojciech Samek, K. Müller. [FaML] 24 Jun 2017