Evaluating Explanations: How much do explanations from the teacher aid students?
Danish Pruthi, Rachit Bansal, Bhuwan Dhingra, Livio Baldini Soares, Michael Collins, Zachary Chase Lipton, Graham Neubig, William W. Cohen
1 December 2020 · arXiv:2012.00893 · FAtt, XAI

Papers citing "Evaluating Explanations: How much do explanations from the teacher aid students?"

Showing 50 of 83 citing papers.

VirtualXAI: A User-Centric Framework for Explainability Assessment Leveraging GPT-Generated Personas
Georgios Makridis, Vasileios Koukos, G. Fatouros, D. Kyriazis
06 Mar 2025

Corrections Meet Explanations: A Unified Framework for Explainable Grammatical Error Correction
Jingheng Ye, Shang Qin, Hai-Tao Zheng, Shen Wang, Qingsong Wen
24 Feb 2025

TAGExplainer: Narrating Graph Explanations for Text-Attributed Graph Learning Models
Bo Pan, Zhen Xiong, Guanchen Wu, Zheng Zhang, Yifei Zhang, Liang Zhao
20 Oct 2024 · FAtt

Explanation Regularisation through the Lens of Attributions
Pedro Ferreira, Wilker Aziz, Ivan Titov
23 Jul 2024

Data-Centric Human Preference Optimization with Rationales
H. Just, Ming Jin, Anit Kumar Sahu, Huy Phan, Ruoxi Jia
19 Jul 2024

Retrieved In-Context Principles from Previous Mistakes
Hao Sun, Yong-jia Jiang, Bo Wang, Yingyan Hou, Yan Zhang, Pengjun Xie, Fei Huang
08 Jul 2024

CAVE: Controllable Authorship Verification Explanations
Sahana Ramnath, Kartik Pandey, Elizabeth Boschee, Xiang Ren
24 Jun 2024

Understanding Understanding: A Pragmatic Framework Motivated by Large Language Models
Kevin Leyton-Brown, Y. Shoham
16 Jun 2024 · ELM

Evaluating Saliency Explanations in NLP by Crowdsourcing
Xiaotian Lu, Jiyi Li, Zhen Wan, Xiaofeng Lin, Koh Takeuchi, Hisashi Kashima
17 May 2024 · XAI, FAtt, LRM

Explanation based Bias Decoupling Regularization for Natural Language Inference
Jianxiang Zang, Hui Liu
20 Apr 2024

The Role of Syntactic Span Preferences in Post-Hoc Explanation Disagreement
Jonathan Kamp, Lisa Beinborn, Antske Fokkens
28 Mar 2024

RORA: Robust Free-Text Rationale Evaluation
Zhengping Jiang, Yining Lu, Hanjie Chen, Daniel Khashabi, Benjamin Van Durme, Anqi Liu
28 Feb 2024

TPD: Enhancing Student Language Model Reasoning via Principle Discovery and Guidance
Haorui Wang, Rongzhi Zhang, Yinghao Li, Lingkai Kong, Yuchen Zhuang, Xiusi Chen, Chao Zhang
24 Jan 2024 · LRM

Generating Zero-shot Abstractive Explanations for Rumour Verification
I. Bilal, Preslav Nakov, Rob Procter, M. Liakata
23 Jan 2024

ALMANACS: A Simulatability Benchmark for Language Model Explainability
Edmund Mills, Shiye Su, Stuart J. Russell, Scott Emmons
20 Dec 2023

Evaluating the Utility of Model Explanations for Model Development
Shawn Im, Jacob Andreas, Yilun Zhou
10 Dec 2023 · XAI, FAtt, ELM

Proto-lm: A Prototypical Network-Based Framework for Built-in Interpretability in Large Language Models
Sean Xie, Soroush Vosoughi, Saeed Hassanpour
03 Nov 2023

REFER: An End-to-end Rationale Extraction Framework for Explanation Regularization
Mohammad Reza Ghasemi Madani, Pasquale Minervini
22 Oct 2023

Rephrase, Augment, Reason: Visual Grounding of Questions for Vision-Language Models
Archiki Prasad, Elias Stengel-Eskin, Mohit Bansal
09 Oct 2023 · ReLM, LRM

Measuring Information in Text Explanations
Zining Zhu, Frank Rudzicz
06 Oct 2023 · FAtt

Faithful Explanations of Black-box NLP Models Using LLM-generated Counterfactuals
Y. Gat, Nitay Calderon, Amir Feder, Alexander Chapanin, Amit Sharma, Roi Reichart
01 Oct 2023

Learning by Self-Explaining
Wolfgang Stammer, Felix Friedrich, David Steinmann, Manuel Brack, Hikaru Shindo, Kristian Kersting
15 Sep 2023

Goodhart's Law Applies to NLP's Explanation Benchmarks
Jennifer Hsia, Danish Pruthi, Aarti Singh, Zachary Chase Lipton
28 Aug 2023

Can Authorship Representation Learning Capture Stylistic Features?
Andrew Wang, Cristina Aggazzotti, R. Kotula, Rafael Rivera Soto, M. Bishop, Matthew Wiesner
22 Aug 2023 · AI4TS

Exploring the Landscape of Natural Language Processing Research
Tim Schopf, Karim Arabi, Florian Matthes
20 Jul 2023

Can Language Models Teach Weaker Agents? Teacher Explanations Improve Students via Personalization
Swarnadeep Saha, Peter Hase, Mohit Bansal
15 Jun 2023 · LRM

CREST: A Joint Framework for Rationalization and Counterfactual Text Generation
Marcos Vinícius Treviso, Alexis Ross, Nuno M. Guerreiro, André F.T. Martins
26 May 2023

Counterfactuals of Counterfactuals: a back-translation-inspired approach to analyse counterfactual editors
Giorgos Filandrianos, Edmund Dervakos, Orfeas Menis Mastromichalakis, Chrysoula Zerva, Giorgos Stamou
26 May 2023 · AAML

Quantifying the Intrinsic Usefulness of Attributional Explanations for Graph Neural Networks with Artificial Simulatability Studies
Jonas Teufel, Luca Torresi, Pascal Friederich
25 May 2023 · FAtt

Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes
Lokesh Nagalapatti, Chun-Liang Li, Chih-Kuan Yeh, Hootan Nakhost, Yasuhisa Fujii, Alexander Ratner, Ranjay Krishna, Chen-Yu Lee, Tomas Pfister
03 May 2023 · ALM

Answering Questions by Meta-Reasoning over Multiple Chains of Thought
Ori Yoran, Tomer Wolfson, Ben Bogin, Uri Katz, Daniel Deutch, Jonathan Berant
25 Apr 2023 · ReLM, LRM, KELM

Computational modeling of semantic change
Nina Tahmasebi, Haim Dubossarsky
13 Apr 2023

Training Language Models with Language Feedback at Scale
Jérémy Scheurer, Jon Ander Campos, Tomasz Korbak, Jun Shern Chan, Angelica Chen, Kyunghyun Cho, Ethan Perez
28 Mar 2023 · ALM

Improving Code Generation by Training with Natural Language Feedback
Angelica Chen, Jérémy Scheurer, Tomasz Korbak, Jon Ander Campos, Jun Shern Chan, Samuel R. Bowman, Kyunghyun Cho, Ethan Perez
28 Mar 2023 · SyDa, ALM, AI4CE

Quantifying Context Mixing in Transformers
Hosein Mohebbi, Willem H. Zuidema, Grzegorz Chrupała, A. Alishahi
30 Jan 2023

MEGAN: Multi-Explanation Graph Attention Network
Jonas Teufel, Luca Torresi, Patrick Reiser, Pascal Friederich
23 Nov 2022

PINTO: Faithful Language Reasoning Using Prompt-Generated Rationales
Peifeng Wang, Aaron Chan, Filip Ilievski, Muhao Chen, Xiang Ren
03 Nov 2022 · LRM, ReLM

Does Self-Rationalization Improve Robustness to Spurious Correlations?
Alexis Ross, Matthew E. Peters, Ana Marasović
24 Oct 2022 · LRM

Large Language Models Can Self-Improve
Jiaxin Huang, S. Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, Jiawei Han
20 Oct 2022 · ReLM, AI4MH, LRM

Challenges in Explanation Quality Evaluation
Hendrik Schuff, Heike Adel, Peng Qi, Ngoc Thang Vu
13 Oct 2022 · XAI

Assessing Out-of-Domain Language Model Performance from Few Examples
Prasann Singhal, Jarad Forristal, Xi Ye, Greg Durrett
13 Oct 2022 · LRM

REV: Information-Theoretic Evaluation of Free-Text Rationales
Hanjie Chen, Faeze Brahman, Xiang Ren, Yangfeng Ji, Yejin Choi, Swabha Swayamdipta
10 Oct 2022

Towards Faithful Model Explanation in NLP: A Survey
Qing Lyu, Marianna Apidianaki, Chris Callison-Burch
22 Sep 2022 · XAI

Responsibility: An Example-based Explainable AI approach via Training Process Inspection
Faraz Khadivpour, Arghasree Banerjee, Matthew J. Guzdial
07 Sep 2022 · XAI

FRAME: Evaluating Rationale-Label Consistency Metrics for Free-Text Rationales
Aaron Chan, Shaoliang Nie, Liang Tan, Xiaochang Peng, Hamed Firooz, Maziar Sanjabi, Xiang Ren
02 Jul 2022

Use-Case-Grounded Simulations for Explanation Evaluation
Valerie Chen, Nari Johnson, Nicholay Topin, Gregory Plumb, Ameet Talwalkar
05 Jun 2022 · FAtt, ELM

CEBaB: Estimating the Causal Effects of Real-World Concepts on NLP Model Behavior
Eldar David Abraham, Karel D'Oosterlinck, Amir Feder, Y. Gat, Atticus Geiger, Christopher Potts, Roi Reichart, Zhengxuan Wu
27 May 2022 · CML

ER-Test: Evaluating Explanation Regularization Methods for Language Models
Brihi Joshi, Aaron Chan, Ziyi Liu, Shaoliang Nie, Maziar Sanjabi, Hamed Firooz, Xiang Ren
25 May 2022 · AAML

Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI
Suzanna Sia, Anton Belyy, Amjad Almahairi, Madian Khabsa, Luke Zettlemoyer, Lambert Mathias
25 May 2022 · LRM

A Psychological Theory of Explainability
Scott Cheng-Hsin Yang, Tomas Folke, Patrick Shafto
17 May 2022 · XAI, FAtt