What to Learn, and How: Toward Effective Learning from Rationales
Samuel Carton, Surya Kanoria, Chenhao Tan
30 November 2021 · arXiv:2112.00071

Papers citing "What to Learn, and How: Toward Effective Learning from Rationales"
19 / 19 papers shown

Explanation Regularisation through the Lens of Attributions
Pedro Ferreira, Wilker Aziz, Ivan Titov
23 Jul 2024

Evaluating Human Alignment and Model Faithfulness of LLM Rationale
Mohsen Fayyaz, Fan Yin, Jiao Sun, Nanyun Peng
28 Jun 2024

Exploring the Trade-off Between Model Performance and Explanation Plausibility of Text Classifiers Using Human Rationales
Lucas Resck, Marcos M. Raimundo, Jorge Poco
03 Apr 2024

Towards Faithful Explanations: Boosting Rationalization with Shortcuts Discovery
Linan Yue, Qi Liu, Yichao Du, Li Wang, Weibo Gao, Yanqing An
12 Mar 2024

Identifying Self-Disclosures of Use, Misuse and Addiction in Community-based Social Media Posts
Chenghao Yang, Tuhin Chakrabarty, K. Hochstatter, M. Slavin, N. El-Bassel, Smaranda Muresan
15 Nov 2023

Consistent Multi-Granular Rationale Extraction for Explainable Multi-hop Fact Verification
Jiasheng Si, Yingjie Zhu, Deyu Zhou
16 May 2023 · AAML

Neglected Free Lunch -- Learning Image Classifiers Using Annotation Byproducts
Dongyoon Han, Junsuk Choe, Dante Chun, John Joon Young Chung, Minsuk Chang, Sangdoo Yun, Jean Y. Song, Seong Joon Oh
30 Mar 2023 · OOD

Selective Explanations: Leveraging Human Input to Align Explainable AI
Vivian Lai, Yiming Zhang, Chacha Chen, Q. V. Liao, Chenhao Tan
23 Jan 2023

Going Beyond XAI: A Systematic Survey for Explanation-Guided Learning
Yuyang Gao, Siyi Gu, Junji Jiang, S. Hong, Dazhou Yu, Liang Zhao
07 Dec 2022

On Faithfulness and Coherence of Language Explanations for Recommendation Systems
Zhouhang Xie, Julian McAuley, Bodhisattwa Prasad Majumder
12 Sep 2022 · LRM

Mediators: Conversational Agents Explaining NLP Model Behavior
Nils Feldhus, A. Ravichandran, Sebastian Möller
13 Jun 2022

Investigating the Benefits of Free-Form Rationales
Jiao Sun, Swabha Swayamdipta, Jonathan May, Xuezhe Ma
25 May 2022

ER-Test: Evaluating Explanation Regularization Methods for Language Models
Brihi Joshi, Aaron Chan, Ziyi Liu, Shaoliang Nie, Maziar Sanjabi, Hamed Firooz, Xiang Ren
25 May 2022 · AAML

Logical Reasoning with Span-Level Predictions for Interpretable and Robust NLI Models
Joe Stacey, Pasquale Minervini, Haim Dubossarsky, Marek Rei
23 May 2022 · ReLM, LRM

A Survey on Improving NLP Models with Human Explanations
Mareike Hartmann, Daniel Sonntag
19 Apr 2022 · LRM

Have We Learned to Explain?: How Interpretability Methods Can Learn to Encode Predictions in their Interpretations
N. Jethani, Mukund Sudarshan, Yindalon Aphinyanagphongs, Rajesh Ranganath
02 Mar 2021 · FAtt

Learning from the Best: Rationalizing Prediction by Adversarial Information Calibration
Lei Sha, Oana-Maria Camburu, Thomas Lukasiewicz
16 Dec 2020

Invariant Rationalization
Shiyu Chang, Yang Zhang, Mo Yu, Tommi Jaakkola
22 Mar 2020

e-SNLI: Natural Language Inference with Natural Language Explanations
Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, Phil Blunsom
04 Dec 2018 · LRM