arXiv: 1909.11218
Attention Interpretability Across NLP Tasks
24 September 2019
Shikhar Vashishth, Shyam Upadhyay, Gaurav Singh Tomar, Manaal Faruqui
Tags: XAI, MILM
Papers citing "Attention Interpretability Across NLP Tasks" (33 of 33 papers shown):
- Selective Prompt Anchoring for Code Generation. Yuan Tian, Tianyi Zhang. 24 Feb 2025.
- A Study of the Plausibility of Attention between RNN Encoders in Natural Language Inference. Duc Hau Nguyen, Pascale Sébillot. 23 Jan 2025.
- Regularization, Semi-supervision, and Supervision for a Plausible Attention-Based Explanation. Duc Hau Nguyen, Cyrielle Mallart, Guillaume Gravier, Pascale Sébillot. 22 Jan 2025.
- Dynamic Attention-Guided Context Decoding for Mitigating Context Faithfulness Hallucinations in Large Language Models. Yanwen Huang, Yong Zhang, Ning Cheng, Zhitao Li, Shaojun Wang, Jing Xiao. 02 Jan 2025.
- Interpreting and Exploiting Functional Specialization in Multi-Head Attention under Multi-task Learning. Chong Li, Shaonan Wang, Yunhao Zhang, Jiajun Zhang, Chengqing Zong. 16 Oct 2023.
- Evaluating self-attention interpretability through human-grounded experimental protocol. Milan Bhan, Nina Achache, Victor Legrand, A. Blangero, Nicolas Chesneau. 27 Mar 2023.
- Predicting Hateful Discussions on Reddit using Graph Transformer Networks and Communal Context. Liam Hebert, Lukasz Golab, R. Cohen. 10 Jan 2023.
- On the Explainability of Natural Language Processing Deep Models. Julia El Zini, M. Awad. 13 Oct 2022.
- Systematic Generalization and Emergent Structures in Transformers Trained on Structured Tasks. Yuxuan Li, James L. McClelland. 02 Oct 2022.
- Does Attention Mechanism Possess the Feature of Human Reading? A Perspective of Sentiment Classification Task. Leilei Zhao, Yingyi Zhang, Chengzhi Zhang. 08 Sep 2022.
- Learning to Learn to Predict Performance Regressions in Production at Meta. M. Beller, Hongyu Li, V. Nair, V. Murali, Imad Ahmad, Jürgen Cito, Drew Carlson, Gareth Ari Aye, Wes Dyer. 08 Aug 2022.
- Toward Transparent AI: A Survey on Interpreting the Inner Structures of Deep Neural Networks. Tilman Raukur, A. Ho, Stephen Casper, Dylan Hadfield-Menell. 27 Jul 2022. Tags: AAML, AI4CE.
- Is Attention Interpretation? A Quantitative Assessment On Sets. Jonathan Haab, N. Deutschmann, María Rodríguez Martínez. 26 Jul 2022.
- Silence is Sweeter Than Speech: Self-Supervised Model Using Silence to Store Speaker Information. Chiyu Feng, Po-Chun Hsu, Hung-yi Lee. 08 May 2022. Tags: SSL.
- Learning to Scaffold: Optimizing Model Explanations for Teaching. Patrick Fernandes, Marcos Vinícius Treviso, Danish Pruthi, André F. T. Martins, Graham Neubig. 22 Apr 2022. Tags: FAtt.
- Interpretation of Black Box NLP Models: A Survey. Shivani Choudhary, N. Chatterjee, S. K. Saha. 31 Mar 2022. Tags: FAtt.
- Understanding microbiome dynamics via interpretable graph representation learning. K. Melnyk, Kuba Weimann, Tim Conrad. 02 Mar 2022.
- Counterfactual Explanations for Models of Code. Jürgen Cito, Işıl Dillig, V. Murali, S. Chandra. 10 Nov 2021. Tags: AAML, LRM.
- Interpreting Deep Learning Models in Natural Language Processing: A Review. Xiaofei Sun, Diyi Yang, Xiaoya Li, Tianwei Zhang, Yuxian Meng, Han Qiu, Guoyin Wang, Eduard H. Hovy, Jiwei Li. 20 Oct 2021.
- Evaluating the Faithfulness of Importance Measures in NLP by Recursively Masking Allegedly Important Tokens and Retraining. Andreas Madsen, Nicholas Meade, Vaibhav Adlakha, Siva Reddy. 15 Oct 2021.
- Enjoy the Salience: Towards Better Transformer-based Faithful Explanations with Word Salience. G. Chrysostomou, Nikolaos Aletras. 31 Aug 2021.
- Improving the Faithfulness of Attention-based Explanations with Task-specific Information for Text Classification. G. Chrysostomou, Nikolaos Aletras. 06 May 2021.
- Dodrio: Exploring Transformer Models with Interactive Visualization. Zijie J. Wang, Robert Turko, Duen Horng Chau. 26 Mar 2021.
- Self-Explaining Structures Improve NLP Models. Zijun Sun, Chun Fan, Qinghong Han, Xiaofei Sun, Yuxian Meng, Fei Wu, Jiwei Li. 03 Dec 2020. Tags: MILM, XAI, LRM, FAtt.
- The elephant in the interpretability room: Why use attention as explanation when we have saliency methods? Jasmijn Bastings, Katja Filippova. 12 Oct 2020. Tags: XAI, LRM.
- BERTology Meets Biology: Interpreting Attention in Protein Language Models. Jesse Vig, Ali Madani, L. Varshney, Caiming Xiong, R. Socher, Nazneen Rajani. 26 Jun 2020.
- Explainable CNN-attention Networks (C-Attention Network) for Automated Detection of Alzheimer's Disease. Ning Wang, Mingxuan Chen, K. P. Subbalakshmi. 25 Jun 2020.
- Quantifying Attention Flow in Transformers. Samira Abnar, Willem H. Zuidema. 02 May 2020.
- Hard-Coded Gaussian Attention for Neural Machine Translation. Weiqiu You, Simeng Sun, Mohit Iyyer. 02 May 2020.
- Explainable Deep Learning: A Field Guide for the Uninitiated. Gabrielle Ras, Ning Xie, Marcel van Gerven, Derek Doran. 30 Apr 2020. Tags: AAML, XAI.
- Towards Faithfully Interpretable NLP Systems: How should we define and evaluate faithfulness? Alon Jacovi, Yoav Goldberg. 07 Apr 2020. Tags: XAI.
- Harnessing the linguistic signal to predict scalar inferences. Sebastian Schuster, Yuxing Chen, Judith Degen. 31 Oct 2019.
- Effective Approaches to Attention-based Neural Machine Translation. Thang Luong, Hieu H. Pham, Christopher D. Manning. 17 Aug 2015.