Interpreting Deep Learning Models in Natural Language Processing: A Review
arXiv:2110.10470, 20 October 2021
Xiaofei Sun, Diyi Yang, Xiaoya Li, Tianwei Zhang, Yuxian Meng, Han Qiu, Guoyin Wang, Eduard H. Hovy, Jiwei Li

Papers citing "Interpreting Deep Learning Models in Natural Language Processing: A Review" (50 of 171 papers shown)

ERASER: A Benchmark to Evaluate Rationalized NLP Models (Jay DeYoung, Sarthak Jain, Nazneen Rajani, Eric P. Lehman, Caiming Xiong, R. Socher, Byron C. Wallace; 08 Nov 2019; 638 citations)
Ordered Memory (Songlin Yang, Shawn Tan, Seyedarian Hosseini, Zhouhan Lin, Alessandro Sordoni, Aaron Courville; 29 Oct 2019; 23 citations)
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension (M. Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdel-rahman Mohamed, Omer Levy, Veselin Stoyanov, Luke Zettlemoyer; 29 Oct 2019; 10,851 citations)
Rethinking Cooperative Rationalization: Introspective Extraction and Complement Control (Mo Yu, Shiyu Chang, Yang Zhang, Tommi Jaakkola; 29 Oct 2019; 145 citations)
A Game Theoretic Approach to Class-wise Selective Rationalization (Shiyu Chang, Yang Zhang, Mo Yu, Tommi Jaakkola; 28 Oct 2019; 61 citations)
A Unified MRC Framework for Named Entity Recognition (Xiaoya Li, Jingrong Feng, Yuxian Meng, Qinghong Han, Leilei Gan, Jiwei Li; 25 Oct 2019; 637 citations)
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer (Colin Raffel, Noam M. Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu; 23 Oct 2019; 20,317 citations)
Whatcha lookin' at? DeepLIFTing BERT's Attention in Question Answering (Ekaterina Arkhangelskaia, Sourav Dutta; 14 Oct 2019; 10 citations)
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations (Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut; 26 Sep 2019; 6,467 citations)
Attention Interpretability Across NLP Tasks (Shikhar Vashishth, Shyam Upadhyay, Gaurav Singh Tomar, Manaal Faruqui; 24 Sep 2019; 176 citations)
AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models (Eric Wallace, Jens Tuyls, Junlin Wang, Sanjay Subramanian, Matt Gardner, Sameer Singh; 19 Sep 2019; 138 citations)
Learning to Deceive with Attention-Based Explanations (Danish Pruthi, Mansi Gupta, Bhuwan Dhingra, Graham Neubig, Zachary Chase Lipton; 17 Sep 2019; 193 citations)
Self-Assembling Modular Networks for Interpretable Multi-Hop Reasoning (Yichen Jiang, Joey Tianyi Zhou; 12 Sep 2019; 72 citations)
Designing and Interpreting Probes with Control Tasks (John Hewitt, Percy Liang; 08 Sep 2019; 537 citations)
Interpretable Word Embeddings via Informative Priors (Miriam Hurtado Bodell, Martin Arvidsson, Måns Magnusson; 03 Sep 2019; 18 citations)
Revealing the Dark Secrets of BERT (Olga Kovaleva, Alexey Romanov, Anna Rogers, Anna Rumshisky; 21 Aug 2019; 554 citations)
Visualizing and Understanding the Effectiveness of BERT (Y. Hao, Li Dong, Furu Wei, Ke Xu; 15 Aug 2019; 185 citations)
Attention is not not Explanation (Sarah Wiegreffe, Yuval Pinter; 13 Aug 2019; 914 citations)
On Identifiability in Transformers (Gino Brunner, Yang Liu, Damian Pascual, Oliver Richter, Massimiliano Ciaramita, Roger Wattenhofer; 12 Aug 2019; 189 citations)
SpanBERT: Improving Pre-training by Representing and Predicting Spans (Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, Omer Levy; 24 Jul 2019; 1,967 citations)
A Survey on Explainable Artificial Intelligence (XAI): Towards Medical XAI (Erico Tjoa, Cuntai Guan; 17 Jul 2019; 1,451 citations)
The Price of Interpretability (Dimitris Bertsimas, A. Delarue, Patrick Jaillet, Sébastien Martin; 08 Jul 2019; 34 citations)
Inducing Syntactic Trees from BERT Representations (Rudolf Rosa, David Marecek; 27 Jun 2019; 22 citations)
XLNet: Generalized Autoregressive Pretraining for Language Understanding (Zhilin Yang, Zihang Dai, Yiming Yang, J. Carbonell, Ruslan Salakhutdinov, Quoc V. Le; 19 Jun 2019; 8,447 citations)
EditNTS: An Neural Programmer-Interpreter Model for Sentence Simplification through Explicit Editing (Yue Dong, Zichao Li, Mehdi Rezagholizadeh, Jackie C.K. Cheung; 19 Jun 2019; 160 citations)
COMET: Commonsense Transformers for Automatic Knowledge Graph Construction (Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, Yejin Choi; 12 Jun 2019; 912 citations)
Explore, Propose, and Assemble: An Interpretable Model for Multi-Hop Reading Comprehension (Yichen Jiang, Nitish Joshi, Yen-Chun Chen, Joey Tianyi Zhou; 12 Jun 2019; 39 citations)
What Does BERT Look At? An Analysis of BERT's Attention (Kevin Clark, Urvashi Khandelwal, Omer Levy, Christopher D. Manning; 11 Jun 2019; 1,602 citations)
Is Attention Interpretable? (Sofia Serrano, Noah A. Smith; 09 Jun 2019; 684 citations)
Analyzing the Structure of Attention in a Transformer Language Model (Jesse Vig, Yonatan Belinkov; 07 Jun 2019; 370 citations)
Visualizing and Measuring the Geometry of BERT (Andy Coenen, Emily Reif, Ann Yuan, Been Kim, Adam Pearce, F. Viégas, Martin Wattenberg; 06 Jun 2019; 418 citations)
Explain Yourself! Leveraging Language Models for Commonsense Reasoning (Nazneen Rajani, Bryan McCann, Caiming Xiong, R. Socher; 06 Jun 2019; 566 citations)
Open Sesame: Getting Inside BERT's Linguistic Knowledge (Yongjie Lin, Y. Tan, Robert Frank; 04 Jun 2019; 287 citations)
EDUCE: Explaining model Decisions through Unsupervised Concepts Extraction (Diane Bouchacourt, Ludovic Denoyer; 28 May 2019; 21 citations)
Interpreting and improving natural-language processing (in machines) with natural language-processing (in the brain) (Mariya Toneva, Leila Wehbe; 28 May 2019; 230 citations)
Analyzing Multi-Head Self-Attention: Specialized Heads Do the Heavy Lifting, the Rest Can Be Pruned (Elena Voita, David Talbot, F. Moiseev, Rico Sennrich, Ivan Titov; 23 May 2019; 1,146 citations)
Answering while Summarizing: Multi-task Learning for Multi-hop QA with Evidence Extraction (Kosuke Nishida, Kyosuke Nishida, Masaaki Nagata, Atsushi Otsuka, Itsumi Saito, Hisako Asano, J. Tomita; 21 May 2019; 102 citations)
Interpretable Neural Predictions with Differentiable Binary Variables (Jasmijn Bastings, Wilker Aziz, Ivan Titov; 20 May 2019; 214 citations)
BERT Rediscovers the Classical NLP Pipeline (Ian Tenney, Dipanjan Das, Ellie Pavlick; 15 May 2019; 1,478 citations)
Unified Language Model Pre-training for Natural Language Understanding and Generation (Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, M. Zhou, H. Hon; 08 May 2019; 1,560 citations)
MASS: Masked Sequence to Sequence Pre-training for Language Generation (Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu; 07 May 2019; 966 citations)
Full-Gradient Representation for Neural Network Visualization (Suraj Srinivas, François Fleuret; 02 May 2019; 276 citations)
Analytical Methods for Interpretable Ultradense Word Embeddings (Philipp Dufter, Hinrich Schütze; 18 Apr 2019; 25 citations)
Unsupervised Recurrent Neural Network Grammars (Yoon Kim, Alexander M. Rush, Lei Yu, A. Kuncoro, Chris Dyer, Gábor Melis; 07 Apr 2019; 115 citations)
Unsupervised Latent Tree Induction with Deep Inside-Outside Recursive Autoencoders (Andrew Drozdov, Pat Verga, Mohit Yadav, Mohit Iyyer, Andrew McCallum; 03 Apr 2019; 123 citations)
Identification, Interpretability, and Bayesian Word Embeddings (Adam M. Lauretig; 02 Apr 2019; 11 citations)
Neural Vector Conceptualization for Word Vector Space Interpretation (Robert Schwarzenberg, Lisa Raithel, David Harbecke; 02 Apr 2019; 9 citations)
Attention is not Explanation (Sarthak Jain, Byron C. Wallace; 26 Feb 2019; 1,328 citations)
Multi-Task Deep Neural Networks for Natural Language Understanding (Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao; 31 Jan 2019; 1,273 citations)
Assessing BERT's Syntactic Abilities (Yoav Goldberg; 16 Jan 2019; 496 citations)