ResearchTrend.AI

e-SNLI: Natural Language Inference with Natural Language Explanations

4 December 2018
Oana-Maria Camburu
Tim Rocktäschel
Thomas Lukasiewicz
Phil Blunsom
    LRM

Papers citing "e-SNLI: Natural Language Inference with Natural Language Explanations"

50 / 425 papers shown
On Behalf of the Stakeholders: Trends in NLP Model Interpretability in the Era of LLMs
Nitay Calderon
Roi Reichart
40
10
0
27 Jul 2024
Explanation Regularisation through the Lens of Attributions
Pedro Ferreira
Wilker Aziz
Ivan Titov
46
1
0
23 Jul 2024
SwitchCIT: Switching for Continual Instruction Tuning of Large Language Models
Xinbo Wu
Max Hartman
Vidhata Arjun Jayaraman
L. Varshney
CLL
LRM
34
1
0
16 Jul 2024
RVISA: Reasoning and Verification for Implicit Sentiment Analysis
Wenna Lai
H. Xie
Guandong Xu
Qing Li
LRM
39
1
0
02 Jul 2024
Survey on Knowledge Distillation for Large Language Models: Methods, Evaluation, and Application
Chuanpeng Yang
Wang Lu
Yao Zhu
Yidong Wang
Qian Chen
Chenlong Gao
Bingjie Yan
Yiqiang Chen
ALM
KELM
44
23
0
02 Jul 2024
Evaluating Human Alignment and Model Faithfulness of LLM Rationale
Mohsen Fayyaz
Fan Yin
Jiao Sun
Nanyun Peng
65
3
0
28 Jun 2024
A look under the hood of the Interactive Deep Learning Enterprise (No-IDLE)
Daniel Sonntag
Michael Barz
Thiago S. Gouvêa
VLM
52
4
0
27 Jun 2024
LLMs instead of Human Judges? A Large Scale Empirical Study across 20 NLP Evaluation Tasks
A. Bavaresco
Raffaella Bernardi
Leonardo Bertolazzi
Desmond Elliott
Raquel Fernández
...
David Schlangen
Alessandro Suglia
Aditya K Surikuchi
Ece Takmaz
A. Testoni
ALM
ELM
54
62
0
26 Jun 2024
CAVE: Controllable Authorship Verification Explanations
Sahana Ramnath
Kartik Pandey
Elizabeth Boschee
Xiang Ren
61
1
0
24 Jun 2024
PORT: Preference Optimization on Reasoning Traces
Salem Lahlou
Abdalgader Abubaker
Hakim Hacid
LRM
41
2
0
23 Jun 2024
CaT-BENCH: Benchmarking Language Model Understanding of Causal and Temporal Dependencies in Plans
Yash Kumar Lal
Vanya Cohen
Nathanael Chambers
Niranjan Balasubramanian
Raymond Mooney
ELM
LRM
ReLM
39
3
0
22 Jun 2024
Accurate and Nuanced Open-QA Evaluation Through Textual Entailment
Peiran Yao
Denilson Barbosa
ELM
32
6
0
26 May 2024
Sign of the Times: Evaluating the use of Large Language Models for Idiomaticity Detection
Dylan Phelps
Thomas Pickard
Maggie Mi
Edward Gow-Smith
Aline Villavicencio
50
4
0
15 May 2024
Is the Pope Catholic? Yes, the Pope is Catholic. Generative Evaluation of Non-Literal Intent Resolution in LLMs
Akhila Yerukola
Saujas Vaduguru
Daniel Fried
Maarten Sap
37
1
0
14 May 2024
QCRD: Quality-guided Contrastive Rationale Distillation for Large Language Models
Wei Wang
Zhaowei Li
Qi Xu
Yiqing Cai
Hang Song
Qi Qi
Ran Zhou
Zhida Huang
Tao Wang
Li Xiao
ALM
40
1
0
14 May 2024
The Effect of Model Size on LLM Post-hoc Explainability via LIME
Henning Heyen
Amy Widdicombe
Noah Y. Siegel
Maria Perez-Ortiz
Philip C. Treleaven
LRM
32
1
0
08 May 2024
Zero-shot LLM-guided Counterfactual Generation for Text
Amrita Bhattacharjee
Raha Moraffah
Joshua Garland
Huan Liu
46
4
0
08 May 2024
Hire Me or Not? Examining Language Model's Behavior with Occupation Attributes
Damin Zhang
Yi Zhang
Geetanjali Bihani
Julia Taylor Rayz
53
2
0
06 May 2024
Identification of Entailment and Contradiction Relations between Natural Language Sentences: A Neurosymbolic Approach
Xuyao Feng
Anthony Hunter
19
2
0
02 May 2024
CEval: A Benchmark for Evaluating Counterfactual Text Generation
Van Bach Nguyen
Jörg Schlötterer
Christin Seifert
36
6
0
26 Apr 2024
Aligning Knowledge Graphs Provided by Humans and Generated from Neural Networks in Specific Tasks
Tangrui Li
Jun Zhou
43
0
0
23 Apr 2024
Marking: Visual Grading with Highlighting Errors and Annotating Missing Bits
Shashank Sonkar
Naiming Liu
D. B. Mallick
Richard G. Baraniuk
35
4
0
22 Apr 2024
Explanation based Bias Decoupling Regularization for Natural Language Inference
Jianxiang Zang
Hui Liu
16
0
0
20 Apr 2024
The Probabilities Also Matter: A More Faithful Metric for Faithfulness of Free-Text Explanations in Large Language Models
Noah Y. Siegel
Oana-Maria Camburu
N. Heess
Maria Perez-Ortiz
26
8
0
04 Apr 2024
Exploring the Trade-off Between Model Performance and Explanation Plausibility of Text Classifiers Using Human Rationales
Lucas Resck
Marcos M. Raimundo
Jorge Poco
50
1
0
03 Apr 2024
Using Interpretation Methods for Model Enhancement
Zhuo Chen
Chengyue Jiang
Kewei Tu
21
2
0
02 Apr 2024
Towards a Framework for Evaluating Explanations in Automated Fact Verification
Neema Kotonya
Francesca Toni
32
5
0
29 Mar 2024
Can LLMs Learn from Previous Mistakes? Investigating LLMs' Errors to Boost for Reasoning
Yongqi Tong
Dawei Li
Sizhe Wang
Yujia Wang
Fei Teng
Jingbo Shang
LRM
34
46
0
29 Mar 2024
ChatGPT Rates Natural Language Explanation Quality Like Humans: But on Which Scales?
Fan Huang
Haewoon Kwak
Kunwoo Park
Jisun An
ALM
ELM
AI4MH
40
12
0
26 Mar 2024
Learning To Guide Human Decision Makers With Vision-Language Models
Debodeep Banerjee
Stefano Teso
Burcu Sayin
Andrea Passerini
37
1
0
25 Mar 2024
A Logical Pattern Memory Pre-trained Model for Entailment Tree Generation
Li Yuan
Yi Cai
Haopeng Ren
Jiexin Wang
LRM
22
5
0
11 Mar 2024
Explaining Pre-Trained Language Models with Attribution Scores: An Analysis in Low-Resource Settings
Wei Zhou
Heike Adel
Hendrik Schuff
Ngoc Thang Vu
LRM
32
2
0
08 Mar 2024
Learning to Maximize Mutual Information for Chain-of-Thought Distillation
Xin Chen
Hanxian Huang
Yanjun Gao
Yi Wang
Jishen Zhao
Ke Ding
35
12
0
05 Mar 2024
Finetuned Multimodal Language Models Are High-Quality Image-Text Data Filters
Weizhi Wang
Khalil Mrini
Linjie Yang
Sateesh Kumar
Yu Tian
Xifeng Yan
Heng Wang
38
16
0
05 Mar 2024
Evaluating Webcam-based Gaze Data as an Alternative for Human Rationale Annotations
Stephanie Brandl
Oliver Eberle
Tiago F. R. Ribeiro
Anders Søgaard
Nora Hollenstein
40
1
0
29 Feb 2024
RORA: Robust Free-Text Rationale Evaluation
Zhengping Jiang
Yining Lu
Hanjie Chen
Daniel Khashabi
Benjamin Van Durme
Anqi Liu
53
1
0
28 Feb 2024
Making Reasoning Matter: Measuring and Improving Faithfulness of Chain-of-Thought Reasoning
Debjit Paul
Robert West
Antoine Bosselut
Boi Faltings
ReLM
LRM
41
21
0
21 Feb 2024
ELAD: Explanation-Guided Large Language Models Active Distillation
Yifei Zhang
Bo Pan
Chen Ling
Yuntong Hu
Liang Zhao
46
6
0
20 Feb 2024
Self-AMPLIFY: Improving Small Language Models with Self Post Hoc Explanations
Milan Bhan
Jean-Noel Vittaut
Nicolas Chesneau
Marie-Jeanne Lesot
ReLM
LRM
34
3
0
19 Feb 2024
How Interpretable are Reasoning Explanations from Prompting Large Language Models?
Yeo Wei Jie
Ranjan Satapathy
Rick Mong
Min Zhang
ReLM
LRM
57
16
0
19 Feb 2024
I Learn Better If You Speak My Language: Understanding the Superior Performance of Fine-Tuning Large Language Models with LLM-Generated Responses
Xuan Ren
Biao Wu
Lingqiao Liu
36
5
0
17 Feb 2024
Properties and Challenges of LLM-Generated Explanations
Jenny Kunz
Marco Kuhlmann
30
20
0
16 Feb 2024
Inference to the Best Explanation in Large Language Models
Dhairya Dalal
Marco Valentino
André Freitas
Paul Buitelaar
LRM
ELM
54
3
0
16 Feb 2024
LogicPrpBank: A Corpus for Logical Implication and Equivalence
Zhexiong Liu
Jing Zhang
Jiaying Lu
Wenjing Ma
Joyce C. Ho
ReLM
LRM
42
0
0
14 Feb 2024
Show Me How It's Done: The Role of Explanations in Fine-Tuning Language Models
Mohamad Ballout
U. Krumnack
Gunther Heidemann
Kai-Uwe Kuehnberger
LRM
32
3
0
12 Feb 2024
A Hypothesis-Driven Framework for the Analysis of Self-Rationalising Models
Marc Braun
Jenny Kunz
16
2
0
07 Feb 2024
SIDU-TXT: An XAI Algorithm for NLP with a Holistic Assessment Approach
M. N. Jahromi
Satya M. Muddamsetty
Asta Sofie Stage Jarlner
Anna Murphy Hogenhaug
Thomas Gammeltoft-Hansen
T. Moeslund
27
2
0
05 Feb 2024
Rethinking Interpretability in the Era of Large Language Models
Chandan Singh
J. Inala
Michel Galley
Rich Caruana
Jianfeng Gao
LRM
AI4CE
77
62
0
30 Jan 2024
LLMCheckup: Conversational Examination of Large Language Models via Interpretability Tools and Self-Explanations
Qianli Wang
Tatiana Anikina
Nils Feldhus
Josef van Genabith
Leonhard Hennig
Sebastian Möller
ELM
LRM
18
8
0
23 Jan 2024
Explain Thyself Bully: Sentiment Aided Cyberbullying Detection with Explanation
Krishanu Maity
Prince Jha
Raghav Jain
S. Saha
P. Bhattacharyya
15
1
0
17 Jan 2024