Why do you think that? Exploring Faithful Sentence-Level Rationales Without Supervision
Max Glockner, Ivan Habernal, Iryna Gurevych
7 October 2020 · arXiv:2010.03384 · LRM
Papers citing "Why do you think that? Exploring Faithful Sentence-Level Rationales Without Supervision" (46 papers)
Learning to Faithfully Rationalize by Construction · Sarthak Jain, Sarah Wiegreffe, Yuval Pinter, Byron C. Wallace · 30 Apr 2020
Avoiding the Hypothesis-Only Bias in Natural Language Inference via Ensemble Adversarial Training · Joe Stacey, Pasquale Minervini, Haim Dubossarsky, Sebastian Riedel, Tim Rocktaschel · AI4CE · 16 Apr 2020
Explaining Question Answering Models through Text Generation · Veronica Latcinnik, Jonathan Berant · LRM · 12 Apr 2020
Towards Faithfully Interpretable NLP Systems: How should we define and evaluate faithfulness? · Alon Jacovi, Yoav Goldberg · XAI · 07 Apr 2020
Evaluating Models' Local Decision Boundaries via Contrast Sets · Matt Gardner, Yoav Artzi, Victoria Basmova, Jonathan Berant, Ben Bogin, ..., Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, Ally Zhang, Ben Zhou · ELM · 06 Apr 2020
A Primer in BERTology: What we know about how BERT works · Anna Rogers, Olga Kovaleva, Anna Rumshisky · OffRL · 27 Feb 2020
Explainability Fact Sheets: A Framework for Systematic Assessment of Explainable Approaches · Kacper Sokol, Peter A. Flach · XAI · 11 Dec 2019
Neural Module Networks for Reasoning over Text · Nitish Gupta, Kevin Lin, Dan Roth, Sameer Singh, Matt Gardner · NAI, ReLM, LRM · 10 Dec 2019
Quick and (not so) Dirty: Unsupervised Selection of Justification Sentences for Multi-hop Question Answering · Vikas Yadav, Steven Bethard, Mihai Surdeanu · 17 Nov 2019
ERASER: A Benchmark to Evaluate Rationalized NLP Models · Jay DeYoung, Sarthak Jain, Nazneen Rajani, Eric P. Lehman, Caiming Xiong, R. Socher, Byron C. Wallace · 08 Nov 2019
Select, Answer and Explain: Interpretable Multi-hop Reading Comprehension over Multiple Documents · Ming Tu, Kevin Huang, Guangtao Wang, Jing-ling Huang, Xiaodong He, Bowen Zhou · RALM · 01 Nov 2019
Rethinking Cooperative Rationalization: Introspective Extraction and Complement Control · Mo Yu, Shiyu Chang, Yang Zhang, Tommi Jaakkola · 29 Oct 2019
Can I Trust the Explainer? Verifying Post-hoc Explanatory Methods · Oana-Maria Camburu, Eleonora Giunchiglia, Jakob N. Foerster, Thomas Lukasiewicz, Phil Blunsom · FAtt, AAML · 04 Oct 2019
AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models · Eric Wallace, Jens Tuyls, Junlin Wang, Sanjay Subramanian, Matt Gardner, Sameer Singh · MILM · 19 Sep 2019
Self-Assembling Modular Networks for Interpretable Multi-Hop Reasoning · Yichen Jiang, Joey Tianyi Zhou · ReLM, LRM · 12 Sep 2019
What do Deep Networks Like to Read? · Jonas Pfeiffer, Aishwarya Kamath, Iryna Gurevych, Sebastian Ruder · 10 Sep 2019
Towards Debiasing Fact Verification Models · Tal Schuster, Darsh J. Shah, Yun Jie Serene Yeo, Daniel Filizzola, Enrico Santus, Regina Barzilay · 14 Aug 2019
RoBERTa: A Robustly Optimized BERT Pretraining Approach · Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, M. Lewis, Luke Zettlemoyer, Veselin Stoyanov · AIMat · 26 Jul 2019
Is Attention Interpretable? · Sofia Serrano, Noah A. Smith · 09 Jun 2019
Compositional Questions Do Not Necessitate Multi-hop Reasoning · Sewon Min, Eric Wallace, Sameer Singh, Matt Gardner, Hannaneh Hajishirzi, Luke Zettlemoyer · 07 Jun 2019
Explain Yourself! Leveraging Language Models for Commonsense Reasoning · Nazneen Rajani, Bryan McCann, Caiming Xiong, R. Socher · ReLM, LRM · 06 Jun 2019
Do Human Rationales Improve Machine Explanations? · Julia Strout, Ye Zhang, Raymond J. Mooney · 31 May 2019
Interpretable Neural Predictions with Differentiable Binary Variables · Jasmijn Bastings, Wilker Aziz, Ivan Titov · 20 May 2019
Dynamically Fused Graph Network for Multi-hop Reasoning · Yunxuan Xiao, Yanru Qu, Lin Qiu, Hao Zhou, Lei Li, Weinan Zhang, Yong Yu · 16 May 2019
Understanding Dataset Design Choices for Multi-hop Reasoning · Jifan Chen, Greg Durrett · LRM · 27 Apr 2019
Analyzing and Interpreting Neural Networks for NLP: A Report on the First BlackboxNLP Workshop · Afra Alishahi, Grzegorz Chrupała, Tal Linzen · NAI, MILM · 05 Apr 2019
Inferring Which Medical Treatments Work from Reports of Clinical Trials · Eric P. Lehman, Jay DeYoung, Regina Barzilay, Byron C. Wallace · 02 Apr 2019
Attention is not Explanation · Sarthak Jain, Byron C. Wallace · FAtt · 26 Feb 2019
Evidence Sentence Extraction for Machine Reading Comprehension · Hai Wang, Dian Yu, Kai Sun, Jianshu Chen, Dong Yu, David A. McAllester, Dan Roth · 23 Feb 2019
e-SNLI: Natural Language Inference with Natural Language Explanations · Oana-Maria Camburu, Tim Rocktaschel, Thomas Lukasiewicz, Phil Blunsom · LRM · 04 Dec 2018
Towards Explainable NLP: A Generative Explanation Framework for Text Classification · Hui Liu, Qingyu Yin, William Yang Wang · 01 Nov 2018
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding · Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova · VLM, SSL, SSeg · 11 Oct 2018
Faithful Multimodal Explanation for Visual Question Answering · Jialin Wu, Raymond J. Mooney · 08 Sep 2018
TwoWingOS: A Two-Wing Optimization Strategy for Evidential Claim Verification · Wenpeng Yin, Dan Roth · AAML · 10 Aug 2018
Know What You Don't Know: Unanswerable Questions for SQuAD · Pranav Rajpurkar, Robin Jia, Percy Liang · RALM, ELM · 11 Jun 2018
Explaining Explanations: An Overview of Interpretability of Machine Learning · Leilani H. Gilpin, David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael A. Specter, Lalana Kagal · XAI · 31 May 2018
Pathologies of Neural Models Make Interpretations Difficult · Shi Feng, Eric Wallace, Alvin Grissom II, Mohit Iyyer, Pedro Rodriguez, Jordan L. Boyd-Graber · AAML, FAtt · 20 Apr 2018
AllenNLP: A Deep Semantic Natural Language Processing Platform · Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew E. Peters, Michael Schmitz, Luke Zettlemoyer · VLM · 20 Mar 2018
FEVER: a large-scale dataset for Fact Extraction and VERification · James Thorne, Andreas Vlachos, Christos Christodoulopoulos, Arpit Mittal · HILM · 14 Mar 2018
Annotation Artifacts in Natural Language Inference Data · Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R. Bowman, Noah A. Smith · 06 Mar 2018
A Survey Of Methods For Explaining Black Box Models · Riccardo Guidotti, A. Monreale, Salvatore Ruggieri, Franco Turini, D. Pedreschi, F. Giannotti · XAI · 06 Feb 2018
Simple and Effective Multi-Paragraph Reading Comprehension · Christopher Clark, Matt Gardner · RALM · 29 Oct 2017
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables · Chris J. Maddison, A. Mnih, Yee Whye Teh · BDL · 02 Nov 2016
Rationalizing Neural Predictions · Tao Lei, Regina Barzilay, Tommi Jaakkola · 13 Jun 2016
"Why Should I Trust You?": Explaining the Predictions of Any Classifier · Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin · FAtt, FaML · 16 Feb 2016
Neural Module Networks · Jacob Andreas, Marcus Rohrbach, Trevor Darrell, Dan Klein · CoGe · 09 Nov 2015