ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

arXiv 2012.00893 · Cited By
Evaluating Explanations: How much do explanations from the teacher aid students?

1 December 2020
Danish Pruthi, Rachit Bansal, Bhuwan Dhingra, Livio Baldini Soares, Michael Collins, Zachary Chase Lipton, Graham Neubig, William W. Cohen
Tags: FAtt, XAI

Papers citing "Evaluating Explanations: How much do explanations from the teacher aid students?"

33 / 33 papers shown
Explanation Regularisation through the Lens of Attributions
Pedro Ferreira, Wilker Aziz, Ivan Titov
48 · 1 · 0 · 23 Jul 2024

Retrieved In-Context Principles from Previous Mistakes
Hao Sun, Yong-jia Jiang, Bo Wang, Yingyan Hou, Yan Zhang, Pengjun Xie, Fei Huang
63 · 1 · 0 · 08 Jul 2024

CAVE: Controllable Authorship Verification Explanations
Sahana Ramnath, Kartik Pandey, Elizabeth Boschee, Xiang Ren
66 · 2 · 0 · 24 Jun 2024

Evaluating Saliency Explanations in NLP by Crowdsourcing
Xiaotian Lu, Jiyi Li, Zhen Wan, Xiaofeng Lin, Koh Takeuchi, Hisashi Kashima
Tags: XAI, FAtt, LRM
34 · 1 · 0 · 17 May 2024

ALMANACS: A Simulatability Benchmark for Language Model Explainability
Edmund Mills, Shiye Su, Stuart J. Russell, Scott Emmons
56 · 7 · 0 · 20 Dec 2023

Quantifying the Intrinsic Usefulness of Attributional Explanations for Graph Neural Networks with Artificial Simulatability Studies
Jonas Teufel, Luca Torresi, Pascal Friederich
Tags: FAtt
34 · 1 · 0 · 25 May 2023

Computational modeling of semantic change
Nina Tahmasebi, Haim Dubossarsky
38 · 6 · 0 · 13 Apr 2023

Training Language Models with Language Feedback at Scale
Jérémy Scheurer, Jon Ander Campos, Tomasz Korbak, Jun Shern Chan, Angelica Chen, Kyunghyun Cho, Ethan Perez
Tags: ALM
50 · 103 · 0 · 28 Mar 2023

MEGAN: Multi-Explanation Graph Attention Network
Jonas Teufel, Luca Torresi, Patrick Reiser, Pascal Friederich
26 · 8 · 0 · 23 Nov 2022

Large Language Models Can Self-Improve
Jiaxin Huang, S. Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, Jiawei Han
Tags: ReLM, AI4MH, LRM
47 · 568 · 0 · 20 Oct 2022

Responsibility: An Example-based Explainable AI approach via Training Process Inspection
Faraz Khadivpour, Arghasree Banerjee, Matthew J. Guzdial
Tags: XAI
19 · 2 · 0 · 07 Sep 2022

FRAME: Evaluating Rationale-Label Consistency Metrics for Free-Text Rationales
Aaron Chan, Shaoliang Nie, Liang Tan, Xiaochang Peng, Hamed Firooz, Maziar Sanjabi, Xiang Ren
52 · 9 · 0 · 02 Jul 2022

Use-Case-Grounded Simulations for Explanation Evaluation
Valerie Chen, Nari Johnson, Nicholay Topin, Gregory Plumb, Ameet Talwalkar
Tags: FAtt, ELM
24 · 24 · 0 · 05 Jun 2022

ExSum: From Local Explanations to Model Understanding
Yilun Zhou, Marco Tulio Ribeiro, J. Shah
Tags: FAtt, LRM
29 · 25 · 0 · 30 Apr 2022

Training Language Models with Language Feedback
Jérémy Scheurer, Jon Ander Campos, Jun Shern Chan, Angelica Chen, Kyunghyun Cho, Ethan Perez
Tags: ALM
48 · 48 · 0 · 29 Apr 2022

Learning to Scaffold: Optimizing Model Explanations for Teaching
Patrick Fernandes, Marcos Vinícius Treviso, Danish Pruthi, André F. T. Martins, Graham Neubig
Tags: FAtt
30 · 22 · 0 · 22 Apr 2022

Interpreting Language Models with Contrastive Explanations
Kayo Yin, Graham Neubig
Tags: MILM
23 · 78 · 0 · 21 Feb 2022

Explain, Edit, and Understand: Rethinking User Study Design for Evaluating Model Explanations
Siddhant Arora, Danish Pruthi, Norman M. Sadeh, William W. Cohen, Zachary Chase Lipton, Graham Neubig
Tags: FAtt
40 · 38 · 0 · 17 Dec 2021

UNIREX: A Unified Learning Framework for Language Model Rationale Extraction
Aaron Chan, Maziar Sanjabi, Lambert Mathias, L Tan, Shaoliang Nie, Xiaochang Peng, Xiang Ren, Hamed Firooz
43 · 42 · 0 · 16 Dec 2021

The Irrationality of Neural Rationale Models
Yiming Zheng, Serena Booth, J. Shah, Yilun Zhou
35 · 16 · 0 · 14 Oct 2021

Influence Tuning: Demoting Spurious Correlations via Instance Attribution and Instance-Driven Updates
Xiaochuang Han, Yulia Tsvetkov
Tags: TDI
31 · 30 · 0 · 07 Oct 2021

Diagnostics-Guided Explanation Generation
Pepa Atanasova, J. Simonsen, Christina Lioma, Isabelle Augenstein
Tags: LRM, FAtt
40 · 6 · 0 · 08 Sep 2021

Counterfactual Evaluation for Explainable AI
Yingqiang Ge, Shuchang Liu, Zelong Li, Shuyuan Xu, Shijie Geng, Yunqi Li, Juntao Tan, Fei Sun, Yongfeng Zhang
Tags: CML
38 · 14 · 0 · 05 Sep 2021

On Sample Based Explanation Methods for NLP: Efficiency, Faithfulness, and Semantic Evaluation
Wei Zhang, Ziming Huang, Yada Zhu, Guangnan Ye, Xiaodong Cui, Fan Zhang
31 · 17 · 0 · 09 Jun 2021

A Review on Explainability in Multimodal Deep Neural Nets
Gargi Joshi, Rahee Walambe, K. Kotecha
29 · 140 · 0 · 17 May 2021

On the Sensitivity and Stability of Model Interpretations in NLP
Fan Yin, Zhouxing Shi, Cho-Jui Hsieh, Kai-Wei Chang
Tags: FAtt
19 · 33 · 0 · 18 Apr 2021

Supervising Model Attention with Human Explanations for Robust Natural Language Inference
Joe Stacey, Yonatan Belinkov, Marek Rei
30 · 45 · 0 · 16 Apr 2021

Efficient Explanations from Empirical Explainers
Robert Schwarzenberg, Nils Feldhus, Sebastian Möller
Tags: FAtt
32 · 9 · 0 · 29 Mar 2021

Do Input Gradients Highlight Discriminative Features?
Harshay Shah, Prateek Jain, Praneeth Netrapalli
Tags: AAML, FAtt
28 · 57 · 0 · 25 Feb 2021

When Can Models Learn From Explanations? A Formal Framework for Understanding the Roles of Explanation Data
Peter Hase, Joey Tianyi Zhou
Tags: XAI
25 · 87 · 0 · 03 Feb 2021

FastIF: Scalable Influence Functions for Efficient Model Interpretation and Debugging
Han Guo, Nazneen Rajani, Peter Hase, Joey Tianyi Zhou, Caiming Xiong
Tags: TDI
41 · 102 · 0 · 31 Dec 2020

Invariant Rationalization
Shiyu Chang, Yang Zhang, Mo Yu, Tommi Jaakkola
202 · 201 · 0 · 22 Mar 2020

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
Tags: XAI, FaML
257 · 3,696 · 0 · 28 Feb 2017