Learning to Scaffold: Optimizing Model Explanations for Teaching

22 April 2022
Patrick Fernandes, Marcos Vinícius Treviso, Danish Pruthi, André F. T. Martins, Graham Neubig
FAtt

Papers citing "Learning to Scaffold: Optimizing Model Explanations for Teaching"

21 papers shown

Breaking Free from MMI: A New Frontier in Rationalization by Probing Input Utilization
W. Liu, Zhiying Deng, Zhongyu Niu, Jun Wang, Haozhao Wang, Zhigang Zeng, Ruixuan Li
08 Mar 2025

Explanation Regularisation through the Lens of Attributions
Pedro Ferreira, Wilker Aziz, Ivan Titov
23 Jul 2024

Evaluating Saliency Explanations in NLP by Crowdsourcing
Xiaotian Lu, Jiyi Li, Zhen Wan, Xiaofeng Lin, Koh Takeuchi, Hisashi Kashima
XAI, FAtt, LRM
17 May 2024

Explainable Bayesian Optimization
Tanmay Chakraborty, Christin Seifert, Christian Wirth
24 Jan 2024

Enhancing the Rationale-Input Alignment for Self-explaining Rationalization
Wei Liu, Haozhao Wang, Jun Wang, Zhiying Deng, Yuankai Zhang, Chengwei Wang, Ruixuan Li
07 Dec 2023

Proto-lm: A Prototypical Network-Based Framework for Built-in Interpretability in Large Language Models
Sean Xie, Soroush Vosoughi, Saeed Hassanpour
03 Nov 2023

D-Separation for Causal Self-Explanation
Wei Liu, Jun Wang, Haozhao Wang, Rui Li, Zhiying Deng, YuanKai Zhang, Yang Qiu
23 Sep 2023

Towards Explainable Evaluation Metrics for Machine Translation
Christoph Leiter, Piyawat Lertvittayakumjorn, M. Fomicheva, Wei-Ye Zhao, Yang Gao, Steffen Eger
ELM
22 Jun 2023

Quantifying the Intrinsic Usefulness of Attributional Explanations for Graph Neural Networks with Artificial Simulatability Studies
Jonas Teufel, Luca Torresi, Pascal Friederich
FAtt
25 May 2023

Decoupled Rationalization with Asymmetric Learning Rates: A Flexible Lipschitz Restraint
Wei Liu, Jun Wang, Haozhao Wang, Rui Li, Yang Qiu, Yuankai Zhang, Jie Han, Yixiong Zou
23 May 2023

The Inside Story: Towards Better Understanding of Machine Translation Neural Evaluation Metrics
Ricardo Rei, Nuno M. Guerreiro, Marcos Vinícius Treviso, Luísa Coheur, A. Lavie, André F. T. Martins
19 May 2023

ExaRanker: Explanation-Augmented Neural Ranker
Fernando Ferraretto, Thiago Laitz, R. Lotufo, Rodrigo Nogueira
ELM, LRM
25 Jan 2023

Explanation Regeneration via Information Bottleneck
Qintong Li, Zhiyong Wu, Lingpeng Kong, Wei Bi
19 Dec 2022

Going Beyond XAI: A Systematic Survey for Explanation-Guided Learning
Yuyang Gao, Siyi Gu, Junji Jiang, S. Hong, Dazhou Yu, Liang Zhao
07 Dec 2022

MEGAN: Multi-Explanation Graph Attention Network
Jonas Teufel, Luca Torresi, Patrick Reiser, Pascal Friederich
23 Nov 2022

Induced Natural Language Rationales and Interleaved Markup Tokens Enable Extrapolation in Large Language Models
M. Bueno, Carlos Gemmel, Jeffrey Stephen Dalton, R. Lotufo, Rodrigo Nogueira
LRM
24 Aug 2022

"Will You Find These Shortcuts?" A Protocol for Evaluating the
  Faithfulness of Input Salience Methods for Text Classification
"Will You Find These Shortcuts?" A Protocol for Evaluating the Faithfulness of Input Salience Methods for Text Classification
Jasmijn Bastings
Sebastian Ebert
Polina Zablotskaia
Anders Sandholm
Katja Filippova
115
75
0
14 Nov 2021
ImageNet-21K Pretraining for the Masses
T. Ridnik, Emanuel Ben-Baruch, Asaf Noy, Lihi Zelnik-Manor
SSeg, VLM, CLIP
22 Apr 2021

Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism
M. Shoeybi, M. Patwary, Raul Puri, P. LeGresley, Jared Casper, Bryan Catanzaro
MoE
17 Sep 2019

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Chelsea Finn, Pieter Abbeel, Sergey Levine
OOD
09 Mar 2017

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
XAI, FaML
28 Feb 2017