Input-Cell Attention Reduces Vanishing Saliency of Recurrent Neural Networks
Aya Abdelsalam Ismail, Mohamed K. Gunady, L. Pessoa, H. C. Bravo, Soheil Feizi
AI4TS · 27 October 2019

Papers citing "Input-Cell Attention Reduces Vanishing Saliency of Recurrent Neural Networks" (24 of 24 papers shown)

Explainable AI needs formal notions of explanation correctness
Stefan Haufe, Rick Wilming, Benedict Clark, Rustam Zhumagambetov, Danny Panknin, Ahcène Boubekki
XAI · 22 Sep 2024

GECOBench: A Gender-Controlled Text Dataset and Benchmark for Quantifying Biases in Explanations
Rick Wilming, Artur Dox, Hjalmar Schulz, Marta Oliveira, Benedict Clark, Stefan Haufe
17 Jun 2024

Applications of interpretable deep learning in neuroimaging: a comprehensive review
Lindsay Munroe, Mariana da Silva, Faezeh Heidari, I. Grigorescu, Simon Dahan, E. C. Robinson, Maria Deprez, Po-Wah So
AI4CE · 30 May 2024

Explaining Time Series via Contrastive and Locally Sparse Perturbations
Zichuan Liu, Yingying Zhang, Tianchun Wang, Zefan Wang, Dongsheng Luo, ..., Min Wu, Yi Wang, Chunlin Chen, Lunting Fan, Qingsong Wen
16 Jan 2024

SCAAT: Improving Neural Network Interpretability via Saliency Constrained Adaptive Adversarial Training
Rui Xu, Wenkang Qin, Peixiang Huang, Hao Wang, Lin Luo
FAtt, AAML · 09 Nov 2023

Hiding Backdoors within Event Sequence Data via Poisoning Attacks
Elizaveta Kovtun, A. Ermilova, Dmitry Berestnev, Alexey Zaytsev
SILM, AAML · 20 Aug 2023

Finding Short Signals in Long Irregular Time Series with Continuous-Time Attention Policy Networks
Thomas Hartvigsen, Jidapa Thadajarassiri, Xiangnan Kong, Elke A. Rundensteiner
AI4TS · 08 Feb 2023

Generative Time Series Forecasting with Diffusion, Denoise, and Disentanglement
Yuante Li, Xin-xin Lu, Yaqing Wang, De-Yu Dou
DiffM, AI4TS · 08 Jan 2023

Robust representations of oil wells' intervals via sparse attention mechanism
Alina Rogulina, N. Baramiia, Valerii Kornilov, Sergey Petrakov, Alexey Zaytsev
AI4TS, OOD · 29 Dec 2022

Data-Centric Debugging: mitigating model failures via targeted data collection
Sahil Singla, Atoosa Malemir Chegini, Mazda Moayeri, Soheil Feizi
17 Nov 2022

WindowSHAP: An Efficient Framework for Explaining Time-series Classifiers based on Shapley Values
Amin Nayebi, Sindhu Tipirneni, Chandan K. Reddy, Brandon Foreman, V. Subbian
FAtt, AI4TS · 11 Nov 2022

BolT: Fused Window Transformers for fMRI Time Series Analysis
H. Bedel, Irmak Sivgin, Onat Dalmaz, S. Dar, Tolga Çukur
23 May 2022

Core Risk Minimization using Salient ImageNet
Sahil Singla, Mazda Moayeri, Soheil Feizi
28 Mar 2022

Improving Deep Learning Interpretability by Saliency Guided Training
Aya Abdelsalam Ismail, H. C. Bravo, Soheil Feizi
FAtt · 29 Nov 2021

Scrutinizing XAI using linear ground-truth data with suppressor variables
Rick Wilming, Céline Budding, K. Müller, Stefan Haufe
FAtt · 14 Nov 2021

Salient ImageNet: How to discover spurious features in Deep Learning?
Sahil Singla, Soheil Feizi
AAML, VLM · 08 Oct 2021

Explaining Time Series Predictions with Dynamic Masks
Jonathan Crabbé, M. Schaar
FAtt, AI4TS · 09 Jun 2021

Feature Importance Explanations for Temporal Black-Box Models
Akshay Sood, M. Craven
FAtt, OOD · 23 Feb 2021

TimeSHAP: Explaining Recurrent Models through Sequence Perturbations
João Bento, Pedro Saleiro, André F. Cruz, Mário A. T. Figueiredo, P. Bizarro
FAtt, AI4TS · 30 Nov 2020

Benchmarking Deep Learning Interpretability in Time Series Predictions
Aya Abdelsalam Ismail, Mohamed K. Gunady, H. C. Bravo, Soheil Feizi
XAI, AI4TS, FAtt · 26 Oct 2020

Spatiotemporal Attention for Multivariate Time Series Prediction and Interpretation
Tryambak Gangopadhyay, Sin Yong Tan, Zhanhong Jiang, Rui Meng, Soumik Sarkar
AI4TS · 11 Aug 2020

ProtoryNet - Interpretable Text Classification Via Prototype Trajectories
Dat Hong, Tong Wang, Stephen S. Baek
AI4TS · 03 Jul 2020

Improving the Interpretability of fMRI Decoding using Deep Neural Networks and Adversarial Robustness
Patrick McClure, Dustin Moraczewski, K. Lam, Adam G. Thomas, Francisco Pereira
FAtt, AAML · 23 Apr 2020

What went wrong and when? Instance-wise Feature Importance for Time-series Models
S. Tonekaboni, Shalmali Joshi, Kieran Campbell, David Duvenaud, Anna Goldenberg
FAtt, OOD, AI4TS · 05 Mar 2020