ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

e-SNLI: Natural Language Inference with Natural Language Explanations
arXiv:1812.01193 · 4 December 2018
Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, Phil Blunsom · LRM

Papers citing "e-SNLI: Natural Language Inference with Natural Language Explanations"

50 / 425 papers shown
Explainability-aided Domain Generalization for Image Classification
Robin M. Schmidt · FAtt, OOD · 05 Apr 2021

Explaining the Road Not Taken
Hua Shen, Ting-Hao 'Kenneth' Huang · FAtt, XAI · 27 Mar 2021

Zero-shot Sequence Labeling for Transformer-based Sentence Classifiers
Kamil Bujel, H. Yannakoudakis, Marek Rei · VLM · 26 Mar 2021

Thinking Aloud: Dynamic Context Generation Improves Zero-Shot Reasoning Performance of GPT-2
Gregor Betz, Kyle Richardson, Christian Voigt · ReLM, LRM · 24 Mar 2021

SelfExplain: A Self-Explaining Architecture for Neural Text Classifiers
Dheeraj Rajagopal, Vidhisha Balachandran, Eduard H. Hovy, Yulia Tsvetkov · MILM, SSL, FAtt, AI4TS · 23 Mar 2021

Local Interpretations for Explainable Natural Language Processing: A Survey
Siwen Luo, Hamish Ivison, S. Han, Josiah Poon · MILM · 20 Mar 2021

SILT: Efficient transformer training for inter-lingual inference
Javier Huertas-Tato, Alejandro Martín, David Camacho · 17 Mar 2021

A Study of Automatic Metrics for the Evaluation of Natural Language Explanations
Miruna Clinciu, Arash Eshghi, H. Hastie · 15 Mar 2021

Get Your Vitamin C! Robust Fact Verification with Contrastive Evidence
Tal Schuster, Adam Fisch, Regina Barzilay · 15 Mar 2021

Rissanen Data Analysis: Examining Dataset Characteristics via Description Length
Ethan Perez, Douwe Kiela, Kyunghyun Cho · 05 Mar 2021

Teach Me to Explain: A Review of Datasets for Explainable Natural Language Processing
Sarah Wiegreffe, Ana Marasović · XAI · 24 Feb 2021

When Can Models Learn From Explanations? A Formal Framework for Understanding the Roles of Explanation Data
Peter Hase, Joey Tianyi Zhou · XAI · 03 Feb 2021

Measuring and Improving Consistency in Pretrained Language Models
Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard H. Hovy, Hinrich Schütze, Yoav Goldberg · HILM · 01 Feb 2021

Explainability of deep vision-based autonomous driving systems: Review and challenges
Éloi Zablocki, H. Ben-younes, P. Pérez, Matthieu Cord · XAI · 13 Jan 2021

FastIF: Scalable Influence Functions for Efficient Model Interpretation and Debugging
Han Guo, Nazneen Rajani, Peter Hase, Joey Tianyi Zhou, Caiming Xiong · TDI · 31 Dec 2020

FiD-Ex: Improving Sequence-to-Sequence Models for Extractive Rationale Generation
Kushal Lakhotia, Bhargavi Paranjape, Asish Ghoshal, Wen-tau Yih, Yashar Mehdad, Srini Iyer · 31 Dec 2020

Human Evaluation of Spoken vs. Visual Explanations for Open-Domain QA
Ana Valeria González, Gagan Bansal, Angela Fan, Robin Jia, Yashar Mehdad, Srini Iyer · AAML · 30 Dec 2020

Explaining NLP Models via Minimal Contrastive Editing (MiCE)
Alexis Ross, Ana Marasović, Matthew E. Peters · 27 Dec 2020

To what extent do human explanations of model behavior align with actual model behavior?
Grusha Prasad, Yixin Nie, Joey Tianyi Zhou, Robin Jia, Douwe Kiela, Adina Williams · 24 Dec 2020

ProofWriter: Generating Implications, Proofs, and Abductive Statements over Natural Language
Oyvind Tafjord, Bhavana Dalvi, Peter Clark · 24 Dec 2020

HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection
Binny Mathew, Punyajoy Saha, Seid Muhie Yimam, Chris Biemann, Pawan Goyal, Animesh Mukherjee · 18 Dec 2020

LIREx: Augmenting Language Inference with Relevant Explanation
Xinyan Zhao, V. Vydiswaran · LRM · 16 Dec 2020

Learning from the Best: Rationalizing Prediction by Adversarial Information Calibration
Lei Sha, Oana-Maria Camburu, Thomas Lukasiewicz · 16 Dec 2020

Learning to Rationalize for Nonmonotonic Reasoning with Distant Supervision
Faeze Brahman, Vered Shwartz, Rachel Rudinger, Yejin Choi · LRM · 14 Dec 2020

An Investigation of Language Model Interpretability via Sentence Editing
Samuel Stevens, Yu-Chuan Su · LRM · 28 Nov 2020

Explainable Automated Fact-Checking: A Survey
Neema Kotonya, Francesca Toni · 07 Nov 2020

Measuring Association Between Labels and Free-Text Rationales
Sarah Wiegreffe, Ana Marasović, Noah A. Smith · 24 Oct 2020

Semantics of the Black-Box: Can knowledge graphs help make deep learning systems more interpretable and explainable?
Manas Gaur, Keyur Faldu, A. Sheth · 16 Oct 2020

Natural Language Rationales with Full-Stack Visual Reasoning: From Pixels to Semantic Frames to Commonsense Graphs
Ana Marasović, Chandra Bhagavatula, J. S. Park, Ronan Le Bras, Noah A. Smith, Yejin Choi · ReLM, LRM · 15 Oct 2020

F1 is Not Enough! Models and Evaluation Towards User-Centered Explainable Question Answering
Hendrik Schuff, Heike Adel, Ngoc Thang Vu · ELM · 13 Oct 2020

Evaluating and Characterizing Human Rationales
Samuel Carton, Anirudh Rathore, Chenhao Tan · 09 Oct 2020

Leakage-Adjusted Simulatability: Can Models Generate Non-Trivial Explanations of Their Behavior in Natural Language?
Peter Hase, Shiyue Zhang, Harry Xie, Joey Tianyi Zhou · 08 Oct 2020

Why do you think that? Exploring Faithful Sentence-Level Rationales Without Supervision
Max Glockner, Ivan Habernal, Iryna Gurevych · LRM · 07 Oct 2020

Learning to Explain: Datasets and Models for Identifying Valid Reasoning Chains in Multihop Question-Answering
Harsh Jhamtani, Peter Clark · LRM · 07 Oct 2020

PRover: Proof Generation for Interpretable Reasoning over Rules
Swarnadeep Saha, Sayan Ghosh, Shashank Srivastava, Joey Tianyi Zhou · ReLM, LRM · 06 Oct 2020

Explaining Deep Neural Networks
Oana-Maria Camburu · XAI, FAtt · 04 Oct 2020

Learning Variational Word Masks to Improve the Interpretability of Neural Text Classifiers
Hanjie Chen, Yangfeng Ji · AAML, VLM · 01 Oct 2020

A Survey on Explainability in Machine Reading Comprehension
Mokanarangan Thayaparan, Marco Valentino, André Freitas · FaML · 01 Oct 2020

Case-Based Abductive Natural Language Inference
Marco Valentino, Mokanarangan Thayaparan, André Freitas · 30 Sep 2020

XTE: Explainable Text Entailment
V. S. Silva, André Freitas, Siegfried Handschuh · 25 Sep 2020

A Diagnostic Study of Explainability Techniques for Text Classification
Pepa Atanasova, J. Simonsen, Christina Lioma, Isabelle Augenstein · XAI, FAtt · 25 Sep 2020

The Struggles of Feature-Based Explanations: Shapley Values vs. Minimal Sufficient Subsets
Oana-Maria Camburu, Eleonora Giunchiglia, Jakob N. Foerster, Thomas Lukasiewicz, Phil Blunsom · FAtt · 23 Sep 2020

QED: A Framework and Dataset for Explanations in Question Answering
Matthew Lamm, J. Palomaki, Chris Alberti, D. Andor, Eunsol Choi, Livio Baldini Soares, Michael Collins · 08 Sep 2020

Compositional Explanations of Neurons
Jesse Mu, Jacob Andreas · FAtt, CoGe, MILM · 24 Jun 2020

Rationalizing Text Matching: Learning Sparse Alignments via Optimal Transport
Kyle Swanson, L. Yu, Tao Lei · OT · 27 May 2020

NILE: Natural Language Inference with Faithful Natural Language Explanations
Sawan Kumar, Partha P. Talukdar · XAI, LRM · 25 May 2020

Explaining Black Box Predictions and Unveiling Data Artifacts through Influence Functions
Xiaochuang Han, Byron C. Wallace, Yulia Tsvetkov · MILM, FAtt, AAML, TDI · 14 May 2020

ExpBERT: Representation Engineering with Natural Language Explanations
Shikhar Murty, Pang Wei Koh, Percy Liang · 05 May 2020

What-if I ask you to explain: Explaining the effects of perturbations in procedural text
Dheeraj Rajagopal, Niket Tandon, Bhavana Dalvi, Peter Clarke, Eduard H. Hovy · 04 May 2020

Teaching Machine Comprehension with Compositional Explanations
Qinyuan Ye, Xiao Huang, Elizabeth Boschee, Xiang Ren · LRM, ReLM · 02 May 2020