1808.03894
Cited By
Interpreting Recurrent and Attention-Based Neural Models: a Case Study on Natural Language Inference
12 August 2018
Reza Ghaeini
Xiaoli Z. Fern
Prasad Tadepalli
MILM
Papers citing
"Interpreting Recurrent and Attention-Based Neural Models: a Case Study on Natural Language Inference"
50 / 54 papers shown
Hierarchical Attention Network for Interpretable ECG-based Heart Disease Classification
Mario Padilla Rodriguez
Mohamed Nafea
28
0
0
25 Mar 2025
Fake News Detection After LLM Laundering: Measurement and Explanation
Rupak Kumar Das
Jonathan Dodge
106
0
0
29 Jan 2025
A Study of the Plausibility of Attention between RNN Encoders in Natural Language Inference
Duc Hau Nguyen
Pascale Sébillot
65
5
0
23 Jan 2025
On Explaining with Attention Matrices
Omar Naim
Nicholas Asher
39
1
0
24 Oct 2024
Using Interpretation Methods for Model Enhancement
Zhuo Chen
Chengyue Jiang
Kewei Tu
26
2
0
02 Apr 2024
Towards Reconciling Usability and Usefulness of Explainable AI Methodologies
Pradyumna Tambwekar
Matthew C. Gombolay
41
8
0
13 Jan 2023
Identifying the Source of Vulnerability in Explanation Discrepancy: A Case Study in Neural Text Classification
Ruixuan Tang
Hanjie Chen
Yangfeng Ji
AAML
FAtt
32
2
0
10 Dec 2022
Deconfounding Legal Judgment Prediction for European Court of Human Rights Cases Towards Better Alignment with Experts
Santosh T.Y.S.S
Shanshan Xu
O. Ichim
Matthias Grabmair
39
26
0
25 Oct 2022
More Interpretable Graph Similarity Computation via Maximum Common Subgraph Inference
Zixun Lan
Binjie Hong
Ye Ma
Fei Ma
32
14
0
09 Aug 2022
Visual correspondence-based explanations improve AI robustness and human-AI team accuracy
Giang Nguyen
Mohammad Reza Taesiri
Anh Totti Nguyen
30
42
0
26 Jul 2022
Fooling Explanations in Text Classifiers
Adam Ivankay
Ivan Girardi
Chiara Marchiori
P. Frossard
AAML
35
19
0
07 Jun 2022
STRATA: Word Boundaries & Phoneme Recognition From Continuous Urdu Speech using Transfer Learning, Attention, & Data Augmentation
Saad Naeem
Omer Beg
16
0
0
16 Apr 2022
Controlling the Focus of Pretrained Language Generation Models
Jiabao Ji
Yoon Kim
James R. Glass
Tianxing He
45
5
0
02 Mar 2022
Toward a traceable, explainable, and fair JD/Resume recommendation system
Amine Barrak
Bram Adams
Amal Zouaq
21
2
0
02 Feb 2022
POTATO: exPlainable infOrmation exTrAcTion framewOrk
Adam Kovacs
Kinga Gémes
Eszter Iklódi
Gábor Recski
38
4
0
31 Jan 2022
An empirical user-study of text-based nonverbal annotation systems for human-human conversations
Joshua Y. Kim
K. Yacef
19
1
0
30 Dec 2021
Interpreting Deep Learning Models in Natural Language Processing: A Review
Xiaofei Sun
Diyi Yang
Xiaoya Li
Tianwei Zhang
Yuxian Meng
Han Qiu
Guoyin Wang
Eduard H. Hovy
Jiwei Li
24
45
0
20 Oct 2021
Reason induced visual attention for explainable autonomous driving
Sikai Chen
Jiqian Dong
Runjia Du
Yujie Li
Samuel Labi
34
1
0
11 Oct 2021
Do Models Learn the Directionality of Relations? A New Evaluation: Relation Direction Recognition
Shengfei Lyu
Xingyu Wu
Jinlong Li
Qiuju Chen
Huanhuan Chen
35
5
0
19 May 2021
Improving the Faithfulness of Attention-based Explanations with Task-specific Information for Text Classification
G. Chrysostomou
Nikolaos Aletras
32
37
0
06 May 2021
SalKG: Learning From Knowledge Graph Explanations for Commonsense Reasoning
Aaron Chan
Lyne Tchapmi
Bo Long
Soumya Sanyal
Tanishq Gupta
Xiang Ren
ReLM
LRM
32
11
0
18 Apr 2021
LioNets: A Neural-Specific Local Interpretation Technique Exploiting Penultimate Layer Information
Ioannis Mollas
Nick Bassiliades
Grigorios Tsoumakas
31
7
0
13 Apr 2021
Explaining Neural Network Predictions on Sentence Pairs via Learning Word-Group Masks
Hanjie Chen
Song Feng
Jatin Ganhotra
H. Wan
Chulaka Gunasekara
Sachindra Joshi
Yangfeng Ji
29
18
0
09 Apr 2021
Interpretable Deep Learning: Interpretation, Interpretability, Trustworthiness, and Beyond
Xuhong Li
Haoyi Xiong
Xingjian Li
Xuanyu Wu
Xiao Zhang
Ji Liu
Jiang Bian
Dejing Dou
AAML
FaML
XAI
HAI
23
318
0
19 Mar 2021
Enhanced Aspect-Based Sentiment Analysis Models with Progressive Self-supervised Attention Learning
Jinsong Su
Jialong Tang
Hui Jiang
Ziyao Lu
Yubin Ge
Linfeng Song
Deyi Xiong
Le Sun
Jiebo Luo
6
48
0
05 Mar 2021
Self-Explaining Structures Improve NLP Models
Zijun Sun
Chun Fan
Qinghong Han
Xiaofei Sun
Yuxian Meng
Fei Wu
Jiwei Li
MILM
XAI
LRM
FAtt
46
38
0
03 Dec 2020
Unsupervised Expressive Rules Provide Explainability and Assist Human Experts Grasping New Domains
Eyal Shnarch
Leshem Choshen
Guy Moshkowich
Noam Slonim
R. Aharonov
94
9
0
19 Oct 2020
Understanding Neural Abstractive Summarization Models via Uncertainty
Jiacheng Xu
Shrey Desai
Greg Durrett
UQLM
14
47
0
15 Oct 2020
Pair the Dots: Jointly Examining Training History and Test Stimuli for Model Interpretability
Yuxian Meng
Chun Fan
Zijun Sun
Eduard H. Hovy
Fei Wu
Jiwei Li
FAtt
24
10
0
14 Oct 2020
Structured Self-Attention Weights Encode Semantics in Sentiment Analysis
Zhengxuan Wu
Thanh-Son Nguyen
Desmond C. Ong
MILM
26
18
0
10 Oct 2020
DNN2LR: Interpretation-inspired Feature Crossing for Real-world Tabular Data
Zhaocheng Liu
Qiang Liu
Haoli Zhang
Yuntian Chen
19
12
0
22 Aug 2020
Deep Active Learning by Model Interpretability
Qiang Liu
Zhaocheng Liu
Xiaofang Zhu
Yeliang Xiu
24
4
0
23 Jul 2020
Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance
Gagan Bansal
Tongshuang Wu
Joyce Zhou
Raymond Fok
Besmira Nushi
Ece Kamar
Marco Tulio Ribeiro
Daniel S. Weld
42
584
0
26 Jun 2020
Finding Experts in Transformer Models
Xavier Suau
Luca Zappella
N. Apostoloff
15
31
0
15 May 2020
Corpus-level and Concept-based Explanations for Interpretable Document Classification
Tian Shi
Xuchao Zhang
Ping Wang
Chandan K. Reddy
FAtt
24
8
0
24 Apr 2020
Self-Attention Attribution: Interpreting Information Interactions Inside Transformer
Y. Hao
Li Dong
Furu Wei
Ke Xu
ViT
22
215
0
23 Apr 2020
Towards Faithfully Interpretable NLP Systems: How should we define and evaluate faithfulness?
Alon Jacovi
Yoav Goldberg
XAI
48
571
0
07 Apr 2020
Generating Hierarchical Explanations on Text Classification via Feature Interaction Detection
Hanjie Chen
Guangtao Zheng
Yangfeng Ji
FAtt
38
92
0
04 Apr 2020
Explaining Explanations: Axiomatic Feature Interactions for Deep Networks
Joseph D. Janizek
Pascal Sturmfels
Su-In Lee
FAtt
35
143
0
10 Feb 2020
A Neural Approach to Discourse Relation Signal Detection
Amir Zeldes
Yang Liu
11
6
0
08 Jan 2020
Understanding Multi-Head Attention in Abstractive Summarization
Joris Baan
Maartje ter Hoeve
M. V. D. Wees
Anne Schuth
Maarten de Rijke
AAML
27
23
0
10 Nov 2019
Interrogating the Explanatory Power of Attention in Neural Machine Translation
Pooya Moradi
Nishant Kambhatla
Anoop Sarkar
21
16
0
30 Sep 2019
Attention Interpretability Across NLP Tasks
Shikhar Vashishth
Shyam Upadhyay
Gaurav Singh Tomar
Manaal Faruqui
XAI
MILM
42
176
0
24 Sep 2019
On Model Stability as a Function of Random Seed
Pranava Madhyastha
Dhruv Batra
45
62
0
23 Sep 2019
Human-grounded Evaluations of Explanation Methods for Text Classification
Piyawat Lertvittayakumjorn
Francesca Toni
FAtt
15
67
0
29 Aug 2019
Understanding Memory Modules on Learning Simple Algorithms
Kexin Wang
Yu Zhou
Shaonan Wang
Jiajun Zhang
Chengqing Zong
34
0
0
01 Jul 2019
Saliency-driven Word Alignment Interpretation for Neural Machine Translation
Shuoyang Ding
Hainan Xu
Philipp Koehn
33
55
0
25 Jun 2019
Is Attention Interpretable?
Sofia Serrano
Noah A. Smith
45
673
0
09 Jun 2019
An Empirical Study of Spatial Attention Mechanisms in Deep Networks
Xizhou Zhu
Dazhi Cheng
Zheng-Wei Zhang
Stephen Lin
Jifeng Dai
43
403
0
11 Apr 2019
Attention is not Explanation
Sarthak Jain
Byron C. Wallace
FAtt
31
1,301
0
26 Feb 2019