Plausible Extractive Rationalization through Semi-Supervised Entailment Signal

13 February 2024 · arXiv 2402.08479
Yeo Wei Jie, Ranjan Satapathy, Min Zhang

Papers citing "Plausible Extractive Rationalization through Semi-Supervised Entailment Signal"

18 papers:

ZARA: Improving Few-Shot Self-Rationalization for Small Language Models
Wei-Lin Chen, An-Zi Yen, Cheng-Kuang Wu, Hen-Hsen Huang, Hsin-Hsi Chen
12 May 2023 · ReLM, LRM

ROSCOE: A Suite of Metrics for Scoring Step-by-Step Reasoning
O. Yu. Golovneva, Moya Chen, Spencer Poff, Martin Corredor, Luke Zettlemoyer, Maryam Fazel-Zarandi, Asli Celikyilmaz
15 Dec 2022 · ReLM, LRM

SummaC: Re-Visiting NLI-based Models for Inconsistency Detection in Summarization
Philippe Laban, Tobias Schnabel, Paul N. Bennett, Marti A. Hearst
18 Nov 2021 · HILM

Enjoy the Salience: Towards Better Transformer-based Faithful Explanations with Word Salience
G. Chrysostomou, Nikolaos Aletras
31 Aug 2021

Improving the Faithfulness of Attention-based Explanations with Task-specific Information for Text Classification
G. Chrysostomou, Nikolaos Aletras
06 May 2021

FiD-Ex: Improving Sequence-to-Sequence Models for Extractive Rationale Generation
Kushal Lakhotia, Bhargavi Paranjape, Asish Ghoshal, Wen-tau Yih, Yashar Mehdad, Srini Iyer
31 Dec 2020

Why do you think that? Exploring Faithful Sentence-Level Rationales Without Supervision
Max Glockner, Ivan Habernal, Iryna Gurevych
07 Oct 2020 · LRM

Why Attentions May Not Be Interpretable?
Bing Bai, Jian Liang, Guanhua Zhang, Hao Li, Kun Bai, Fei Wang
10 Jun 2020 · FAtt

An Information Bottleneck Approach for Controlling Conciseness in Rationale Extraction
Bhargavi Paranjape, Mandar Joshi, John Thickstun, Hannaneh Hajishirzi, Luke Zettlemoyer
01 May 2020

Evaluating the Factual Consistency of Abstractive Text Summarization
Wojciech Kryściński, Bryan McCann, Caiming Xiong, R. Socher
28 Oct 2019 · HILM

Attention is not not Explanation
Sarah Wiegreffe, Yuval Pinter
13 Aug 2019 · XAI, AAML, FAtt

RoBERTa: A Robustly Optimized BERT Pretraining Approach
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, M. Lewis, Luke Zettlemoyer, Veselin Stoyanov
26 Jul 2019 · AIMat

Is Attention Interpretable?
Sofia Serrano, Noah A. Smith
09 Jun 2019

BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions
Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, Kristina Toutanova
24 May 2019

Attention is not Explanation
Sarthak Jain, Byron C. Wallace
26 Feb 2019 · FAtt

FEVER: a large-scale dataset for Fact Extraction and VERification
James Thorne, Andreas Vlachos, Christos Christodoulopoulos, Arpit Mittal
14 Mar 2018 · HILM

A Unified Approach to Interpreting Model Predictions
Scott M. Lundberg, Su-In Lee
22 May 2017 · FAtt

"Why Should I Trust You?": Explaining the Predictions of Any Classifier
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
FAtt
FaML
1.1K
16,931
0
16 Feb 2016