ResearchTrend.AI
Is Attention Interpretable?


9 June 2019
Sofia Serrano
Noah A. Smith
arXiv: 1906.03731

Papers citing "Is Attention Interpretable?"

50 / 165 papers shown

News-based Business Sentiment and its Properties as an Economic Index
Kazuhiro Seki, Y. Ikuta, Yoichi Matsubayashi (20 Oct 2021)

Evaluating the Faithfulness of Importance Measures in NLP by Recursively Masking Allegedly Important Tokens and Retraining
Andreas Madsen, Nicholas Meade, Vaibhav Adlakha, Siva Reddy (15 Oct 2021)

A Framework for Rationale Extraction for Deep QA models [AAML, FAtt]
Sahana Ramnath, Preksha Nema, Deep Sahni, Mitesh M. Khapra (09 Oct 2021)

Explaining the Attention Mechanism of End-to-End Speech Recognition Using Decision Trees
Yuanchao Wang, Wenjing Du, Chenghao Cai, Yanyan Xu (08 Oct 2021)

Learning Predictive and Interpretable Timeseries Summaries from ICU Data [AI4TS]
Nari Johnson, S. Parbhoo, A. Ross, Finale Doshi-Velez (22 Sep 2021)

Automated and Explainable Ontology Extension Based on Deep Learning: A Case Study in the Chemical Domain
A. Memariani, Martin Glauer, Fabian Neuhaus, Till Mossakowski, Janna Hastings (19 Sep 2021)

Is Attention Better Than Matrix Decomposition?
Zhengyang Geng, Meng-Hao Guo, Hongxu Chen, Xia Li, Ke Wei, Zhouchen Lin (09 Sep 2021)

Attributing Fair Decisions with Attention Interventions
Ninareh Mehrabi, Umang Gupta, Fred Morstatter, Greg Ver Steeg, Aram Galstyan (08 Sep 2021)

Counterfactual Evaluation for Explainable AI [CML]
Yingqiang Ge, Shuchang Liu, Zelong Li, Shuyuan Xu, Shijie Geng, Yunqi Li, Juntao Tan, Fei Sun, Yongfeng Zhang (05 Sep 2021)

Enjoy the Salience: Towards Better Transformer-based Faithful Explanations with Word Salience
G. Chrysostomou, Nikolaos Aletras (31 Aug 2021)

Translation Error Detection as Rationale Extraction
M. Fomicheva, Lucia Specia, Nikolaos Aletras (27 Aug 2021)

A Survey on Automated Fact-Checking
Zhijiang Guo, M. Schlichtkrull, Andreas Vlachos (26 Aug 2021)

On Sample Based Explanation Methods for NLP: Efficiency, Faithfulness, and Semantic Evaluation
Wei Zhang, Ziming Huang, Yada Zhu, Guangnan Ye, Xiaodong Cui, Fan Zhang (09 Jun 2021)

MERLOT: Multimodal Neural Script Knowledge Models [VLM, LRM]
Rowan Zellers, Ximing Lu, Jack Hessel, Youngjae Yu, J. S. Park, Jize Cao, Ali Farhadi, Yejin Choi (04 Jun 2021)

Do Models Learn the Directionality of Relations? A New Evaluation: Relation Direction Recognition
Shengfei Lyu, Xingyu Wu, Jinlong Li, Qiuju Chen, Huanhuan Chen (19 May 2021)

Collaborative Graph Learning with Auxiliary Text for Temporal Event Prediction in Healthcare
Chang Lu, Chandan K. Reddy, Prithwish Chakraborty, Samantha Kleinberg, Yue Ning (16 May 2021)

Improving the Faithfulness of Attention-based Explanations with Task-specific Information for Text Classification
G. Chrysostomou, Nikolaos Aletras (06 May 2021)

Towards Rigorous Interpretations: a Formalisation of Feature Attribution [FAtt]
Darius Afchar, Romain Hennequin, Vincent Guigue (26 Apr 2021)

On the Sensitivity and Stability of Model Interpretations in NLP [FAtt]
Fan Yin, Zhouxing Shi, Cho-Jui Hsieh, Kai-Wei Chang (18 Apr 2021)

Flexible Instance-Specific Rationalization of NLP Models
G. Chrysostomou, Nikolaos Aletras (16 Apr 2021)

VGNMN: Video-grounded Neural Module Network to Video-Grounded Language Tasks [MLLM]
Hung Le, Nancy F. Chen, Guosheng Lin (16 Apr 2021)

Local Interpretations for Explainable Natural Language Processing: A Survey [MILM]
Siwen Luo, Hamish Ivison, S. Han, Josiah Poon (20 Mar 2021)

TransFG: A Transformer Architecture for Fine-grained Recognition [ViT]
Ju He, Jieneng Chen, Shuai Liu, Adam Kortylewski, Cheng Yang, Yutong Bai, Changhu Wang (14 Mar 2021)

Counterfactuals and Causability in Explainable Artificial Intelligence: Theory, Algorithms, and Applications [CML]
Yu-Liang Chou, Catarina Moreira, P. Bruza, Chun Ouyang, Joaquim A. Jorge (07 Mar 2021)

RECAST: Enabling User Recourse and Interpretability of Toxicity Detection Models with Interactive Visualization
Austin P. Wright, Omar Shaikh, Haekyu Park, Will Epperson, Muhammed Ahmed, Stephane Pinel, Duen Horng Chau, Diyi Yang (08 Feb 2021)

Explaining Black-box Models for Biomedical Text Classification
M. Moradi, Matthias Samwald (20 Dec 2020)

AIST: An Interpretable Attention-based Deep Learning Model for Crime Prediction
Yeasir Rayhan, T. Hashem (16 Dec 2020)

Writing Polishment with Simile: Task, Dataset and A Neural Approach
Jiayi Zhang, Zhi Cui, Xiaoqiang Xia, Yalong Guo, Yanran Li, Chen Wei, Jianwei Cui (15 Dec 2020)

Learning to Rationalize for Nonmonotonic Reasoning with Distant Supervision [LRM]
Faeze Brahman, Vered Shwartz, Rachel Rudinger, Yejin Choi (14 Dec 2020)

Interpretability and Explainability: A Machine Learning Zoo Mini-tour [XAI]
Ricards Marcinkevics, Julia E. Vogt (03 Dec 2020)

Self-Explaining Structures Improve NLP Models [MILM, XAI, LRM, FAtt]
Zijun Sun, Chun Fan, Qinghong Han, Xiaofei Sun, Yuxian Meng, Fei Wu, Jiwei Li (03 Dec 2020)

TimeSHAP: Explaining Recurrent Models through Sequence Perturbations [FAtt, AI4TS]
João Bento, Pedro Saleiro, André F. Cruz, Mário A. T. Figueiredo, P. Bizarro (30 Nov 2020)

Multi-document Summarization via Deep Learning Techniques: A Survey
Congbo Ma, W. Zhang, Mingyu Guo, Hu Wang, Quan Z. Sheng (10 Nov 2020)

ABNIRML: Analyzing the Behavior of Neural IR Models
Sean MacAvaney, Sergey Feldman, Nazli Goharian, Doug Downey, Arman Cohan (02 Nov 2020)

Towards Interpreting BERT for Reading Comprehension Based QA
Sahana Ramnath, Preksha Nema, Deep Sahni, Mitesh M. Khapra (18 Oct 2020)

The elephant in the interpretability room: Why use attention as explanation when we have saliency methods? [XAI, LRM]
Jasmijn Bastings, Katja Filippova (12 Oct 2020)

Why do you think that? Exploring Faithful Sentence-Level Rationales Without Supervision [LRM]
Max Glockner, Ivan Habernal, Iryna Gurevych (07 Oct 2020)

Fine-Grained Grounding for Multimodal Speech Recognition
Tejas Srinivasan, Ramon Sanabria, Florian Metze, Desmond Elliott (05 Oct 2020)

Interpreting Graph Neural Networks for NLP With Differentiable Edge Masking [AI4CE]
M. Schlichtkrull, Nicola De Cao, Ivan Titov (01 Oct 2020)

Examining the rhetorical capacities of neural language models
Zining Zhu, Chuer Pan, Mohamed Abdalla, Frank Rudzicz (01 Oct 2020)

Bridging Information-Seeking Human Gaze and Machine Reading Comprehension
J. Malmaud, R. Levy, Yevgeni Berzak (30 Sep 2020)

Spatial Attention as an Interface for Image Captioning Models
P. Sadler (29 Sep 2020)

Attention Flows: Analyzing and Comparing Attention Mechanisms in Language Models
Joseph F DeRose, Jiayao Wang, M. Berger (03 Sep 2020)

Explainable Predictive Process Monitoring
Musabir Musabayli, F. Maggi, Williams Rizzi, Josep Carmona, Chiara Di Francescomarino (04 Aug 2020)

BERTology Meets Biology: Interpreting Attention in Protein Language Models
Jesse Vig, Ali Madani, L. Varshney, Caiming Xiong, R. Socher, Nazneen Rajani (26 Jun 2020)

Why Attentions May Not Be Interpretable? [FAtt]
Bing Bai, Jian Liang, Guanhua Zhang, Hao Li, Kun Bai, Fei Wang (10 Jun 2020)

Quantifying Attention Flow in Transformers
Samira Abnar, Willem H. Zuidema (02 May 2020)

When BERT Plays the Lottery, All Tickets Are Winning [MILM]
Sai Prasanna, Anna Rogers, Anna Rumshisky (01 May 2020)

Explainable Deep Learning: A Field Guide for the Uninitiated [AAML, XAI]
Gabrielle Ras, Ning Xie, Marcel van Gerven, Derek Doran (30 Apr 2020)

Towards Transparent and Explainable Attention Models
Akash Kumar Mohankumar, Preksha Nema, Sharan Narasimhan, Mitesh M. Khapra, Balaji Vasan Srinivasan, Balaraman Ravindran (29 Apr 2020)