arXiv:1606.04155 (v2, latest)
Rationalizing Neural Predictions
Tao Lei, Regina Barzilay, Tommi Jaakkola
13 June 2016
Papers citing "Rationalizing Neural Predictions" (50 of 327 papers shown)
A Survey on the Explainability of Supervised Machine Learning
  Nadia Burkart, Marco F. Huber · FaML, XAI · 77 / 784 / 0 · 16 Nov 2020

DoLFIn: Distributions over Latent Features for Interpretability
  Phong Le, Willem H. Zuidema · FAtt · 30 / 0 / 0 · 10 Nov 2020

Weakly- and Semi-supervised Evidence Extraction
  Danish Pruthi, Bhuwan Dhingra, Graham Neubig, Zachary Chase Lipton · 83 / 23 / 0 · 03 Nov 2020

Measuring Association Between Labels and Free-Text Rationales
  Sarah Wiegreffe, Ana Marasović, Noah A. Smith · 356 / 182 / 0 · 24 Oct 2020

Coherent Hierarchical Multi-Label Classification Networks
  Eleonora Giunchiglia, Thomas Lukasiewicz · AILaw · 219 / 100 / 0 · 20 Oct 2020

Explaining and Improving Model Behavior with k Nearest Neighbor Representations
  Nazneen Rajani, Ben Krause, Wenpeng Yin, Tong Niu, R. Socher, Caiming Xiong · FAtt · 72 / 34 / 0 · 18 Oct 2020

The elephant in the interpretability room: Why use attention as explanation when we have saliency methods?
  Jasmijn Bastings, Katja Filippova · XAI, LRM · 116 / 179 / 0 · 12 Oct 2020

Weakly Supervised Medication Regimen Extraction from Medical Conversations
  Dhruvesh Patel, Sandeep Konam, Sai P. Selvaraj · MedIm · 44 / 9 / 0 · 11 Oct 2020

FIND: Human-in-the-Loop Debugging Deep Text Classifiers
  Piyawat Lertvittayakumjorn, Lucia Specia, Francesca Toni · 65 / 54 / 0 · 10 Oct 2020

Evaluating and Characterizing Human Rationales
  Samuel Carton, Anirudh Rathore, Chenhao Tan · 79 / 49 / 0 · 09 Oct 2020
Why do you think that? Exploring Faithful Sentence-Level Rationales Without Supervision
  Max Glockner, Ivan Habernal, Iryna Gurevych · LRM · 103 / 26 / 0 · 07 Oct 2020

Explaining Deep Neural Networks
  Oana-Maria Camburu · XAI, FAtt · 110 / 26 / 0 · 04 Oct 2020

Learning Variational Word Masks to Improve the Interpretability of Neural Text Classifiers
  Hanjie Chen, Yangfeng Ji · AAML, VLM · 122 / 66 / 0 · 01 Oct 2020

A Diagnostic Study of Explainability Techniques for Text Classification
  Pepa Atanasova, J. Simonsen, Christina Lioma, Isabelle Augenstein · XAI, FAtt · 112 / 226 / 0 · 25 Sep 2020

Improving Robustness and Generality of NLP Models Using Disentangled Representations
  Jiawei Wu, Xiaoya Li, Xiang Ao, Yuxian Meng, Leilei Gan, Jiwei Li · OOD, DRL · 43 / 11 / 0 · 21 Sep 2020

Can We Trust Your Explanations? Sanity Checks for Interpreters in Android Malware Analysis
  Ming Fan, Wenying Wei, Xiaofei Xie, Yang Liu, X. Guan, Ting Liu · FAtt, AAML · 102 / 38 / 0 · 13 Aug 2020

Automated Topical Component Extraction Using Neural Network Attention Scores from Source-based Essay Scoring
  Haoran Zhang, Diane Litman · 78 / 10 / 0 · 04 Aug 2020

Explainable Predictive Process Monitoring
  Musabir Musabayli, F. Maggi, Williams Rizzi, Josep Carmona, Chiara Di Francescomarino · 75 / 61 / 0 · 04 Aug 2020

Looking in the Right place for Anomalies: Explainable AI through Automatic Location Learning
  Satyananda Kashyap, Alexandros Karargyris, Joy T. Wu, Yaniv Gur, Arjun Sharma, Ken C. L. Wong, Mehdi Moradi, Tanveer Syeda-Mahmood · OOD · 51 / 13 / 0 · 02 Aug 2020

Explainable Prediction of Text Complexity: The Missing Preliminaries for Text Simplification
  Cristina Garbacea, Mengtian Guo, Samuel Carton, Qiaozhu Mei · 60 / 28 / 0 · 31 Jul 2020

Are Visual Explanations Useful? A Case Study in Model-in-the-Loop Prediction
  Eric Chu, D. Roy, Jacob Andreas · FAtt, LRM · 86 / 71 / 0 · 23 Jul 2020

BERTology Meets Biology: Interpreting Attention in Protein Language Models
  Jesse Vig, Ali Madani, Lav Varshney, Caiming Xiong, R. Socher, Nazneen Rajani · 119 / 295 / 0 · 26 Jun 2020
Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance
  Gagan Bansal, Tongshuang Wu, Joyce Zhou, Raymond Fok, Besmira Nushi, Ece Kamar, Marco Tulio Ribeiro, Daniel S. Weld · 153 / 607 / 0 · 26 Jun 2020

SEAL: Segment-wise Extractive-Abstractive Long-form Text Summarization
  Yao-Min Zhao, Mohammad Saleh, Peter J. Liu · RALM · 102 / 25 / 0 · 18 Jun 2020

Gradient Estimation with Stochastic Softmax Tricks
  Max B. Paulus, Dami Choi, Daniel Tarlow, Andreas Krause, Chris J. Maddison · BDL · 106 / 88 / 0 · 15 Jun 2020

XGNN: Towards Model-Level Explanations of Graph Neural Networks
  Haonan Yuan, Jiliang Tang, Helen Zhou, Shuiwang Ji · 116 / 402 / 0 · 03 Jun 2020

Aligning Faithful Interpretations with their Social Attribution
  Alon Jacovi, Yoav Goldberg · 83 / 106 / 0 · 01 Jun 2020

Explainable Artificial Intelligence: a Systematic Review
  Giulia Vilone, Luca Longo · XAI · 118 / 271 / 0 · 29 May 2020

Explaining Black Box Predictions and Unveiling Data Artifacts through Influence Functions
  Xiaochuang Han, Byron C. Wallace, Yulia Tsvetkov · MILM, FAtt, AAML, TDI · 104 / 175 / 0 · 14 May 2020

Don't Explain without Verifying Veracity: An Evaluation of Explainable AI with Video Activity Recognition
  Mahsan Nourani, Chiradeep Roy, Tahrima Rahman, Eric D. Ragan, Nicholas Ruozzi, Vibhav Gogate · AAML · 69 / 18 / 0 · 05 May 2020

Evaluating Explanation Methods for Neural Machine Translation
  Jierui Li, Lemao Liu, Huayang Li, Guanlin Li, Guoping Huang, Shuming Shi · 51 / 23 / 0 · 04 May 2020

Rationalizing Medical Relation Prediction from Corpus-level Statistics
  Zhen Wang, Jennifer A Lee, Simon M. Lin, Huan Sun · OOD · 37 / 4 / 0 · 02 May 2020

ESPRIT: Explaining Solutions to Physical Reasoning Tasks
  Nazneen Rajani, Rui Zhang, Y. Tan, Stephan Zheng, Jeremy C. Weiss, Aadit Vyas, Abhijit Gupta, Caiming Xiong, R. Socher, Dragomir R. Radev · ReLM, LRM · 75 / 21 / 0 · 02 May 2020
An Information Bottleneck Approach for Controlling Conciseness in Rationale Extraction
  Bhargavi Paranjape, Mandar Joshi, John Thickstun, Hannaneh Hajishirzi, Luke Zettlemoyer · 89 / 101 / 0 · 01 May 2020

Learning to Faithfully Rationalize by Construction
  Sarthak Jain, Sarah Wiegreffe, Yuval Pinter, Byron C. Wallace · 97 / 165 / 0 · 30 Apr 2020

How do Decisions Emerge across Layers in Neural Models? Interpretation with Differentiable Masking
  Nicola De Cao, Michael Schlichtkrull, Wilker Aziz, Ivan Titov · 76 / 92 / 0 · 30 Apr 2020

Fact or Fiction: Verifying Scientific Claims
  David Wadden, Shanchuan Lin, Kyle Lo, Lucy Lu Wang, Madeleine van Zuylen, Arman Cohan, Hannaneh Hajishirzi · HAI · 208 / 466 / 0 · 30 Apr 2020

Explainable Deep Learning: A Field Guide for the Uninitiated
  Gabrielle Ras, Ning Xie, Marcel van Gerven, Derek Doran · AAML, XAI · 120 / 382 / 0 · 30 Apr 2020

Towards Transparent and Explainable Attention Models
  Akash Kumar Mohankumar, Preksha Nema, Sharan Narasimhan, Mitesh M. Khapra, Balaji Vasan Srinivasan, Balaraman Ravindran · 86 / 102 / 0 · 29 Apr 2020

The Explanation Game: Towards Prediction Explainability through Sparse Communication
  Marcos Vinícius Treviso, André F. T. Martins · FAtt · 70 / 3 / 0 · 28 Apr 2020

Sequential Interpretability: Methods, Applications, and Future Direction for Understanding Deep Learning Models in the Context of Sequential Data
  B. Shickel, Parisa Rashidi · AI4TS · 70 / 18 / 0 · 27 Apr 2020

Generating Hierarchical Explanations on Text Classification via Feature Interaction Detection
  Hanjie Chen, Guangtao Zheng, Yangfeng Ji · FAtt · 117 / 95 / 0 · 04 Apr 2020

Invariant Rationalization
  Shiyu Chang, Yang Zhang, Mo Yu, Tommi Jaakkola · 249 / 207 / 0 · 22 Mar 2020

Harnessing Explanations to Bridge AI and Humans
  Vivian Lai, Samuel Carton, Chenhao Tan · 59 / 5 / 0 · 16 Mar 2020
Explainable Deep Modeling of Tabular Data using TableGraphNet
  G. Terejanu, Jawad Chowdhury, Rezaur Rashid, Asif J. Chowdhury · LMTD, FAtt · 21 / 3 / 0 · 12 Feb 2020

Multi-Objective Molecule Generation using Interpretable Substructures
  Wengong Jin, Regina Barzilay, Tommi Jaakkola · 98 / 23 / 0 · 08 Feb 2020

Description Based Text Classification with Reinforcement Learning
  Duo Chai, Wei Wu, Qinghong Han, Leilei Gan, Jiwei Li · VLM · 181 / 68 / 0 · 08 Feb 2020

"Why is 'Chicago' deceptive?" Towards Building Model-Driven Tutorials for Humans
  Vivian Lai, Han Liu, Chenhao Tan · 107 / 143 / 0 · 14 Jan 2020

On Interpretability of Artificial Neural Networks: A Survey
  Fenglei Fan, Jinjun Xiong, Mengzhou Li, Ge Wang · AAML, AI4CE · 96 / 318 / 0 · 08 Jan 2020

Text Classification for Azerbaijani Language Using Machine Learning and Embedding
  U. Suleymanov, Behnam Kiani Kalejahi, Elkhan Amrahov, Rashid Badirkhanli · 30 / 9 / 0 · 26 Dec 2019