Staying True to Your Word: (How) Can Attention Become Explanation?
Martin Tutek, Jan Snajder
arXiv:2005.09379, 19 May 2020
Papers citing "Staying True to Your Word: (How) Can Attention Become Explanation?" (8 papers shown)
1. A Study of the Plausibility of Attention between RNN Encoders in Natural Language Inference. Duc Hau Nguyen, Pascale Sébillot. 23 Jan 2025.
2. Easy to Decide, Hard to Agree: Reducing Disagreements Between Saliency Methods. Josip Jukić, Martin Tutek, Jan Snajder. 15 Nov 2022. [FAtt]
3. Understanding Interlocking Dynamics of Cooperative Rationalization. Mo Yu, Yang Zhang, Shiyu Chang, Tommi Jaakkola. 26 Oct 2021.
4. Enjoy the Salience: Towards Better Transformer-based Faithful Explanations with Word Salience. G. Chrysostomou, Nikolaos Aletras. 31 Aug 2021.
5. Improving the Faithfulness of Attention-based Explanations with Task-specific Information for Text Classification. G. Chrysostomou, Nikolaos Aletras. 06 May 2021.
6. The elephant in the interpretability room: Why use attention as explanation when we have saliency methods? Jasmijn Bastings, Katja Filippova. 12 Oct 2020. [XAI, LRM]
7. The Bottom-up Evolution of Representations in the Transformer: A Study with Machine Translation and Language Modeling Objectives. Elena Voita, Rico Sennrich, Ivan Titov. 03 Sep 2019.
8. Towards A Rigorous Science of Interpretable Machine Learning. Finale Doshi-Velez, Been Kim. 28 Feb 2017. [XAI, FaML]