ResearchTrend.AI
Can Explanations Be Useful for Calibrating Black Box Models?
Xi Ye, Greg Durrett
FAtt
14 October 2021

Papers citing "Can Explanations Be Useful for Calibrating Black Box Models?"
21 of 21 papers shown

A Survey of Calibration Process for Black-Box LLMs
Liangru Xie, Hui Liu, Jingying Zeng, Xianfeng Tang, Yan Han, Chen Luo, Jing Huang, Zhen Li, Suhang Wang, Qi He
17 Dec 2024

Self-eXplainable AI for Medical Image Analysis: A Survey and New Outlooks
Junlin Hou, Sicen Liu, Yequan Bie, Hongmei Wang, Andong Tan, Luyang Luo, Hao Chen
XAI
03 Oct 2024

Uncertainty Estimation and Quantification for LLMs: A Simple Supervised Approach
Linyu Liu, Yu Pan, Xiaocheng Li, Guanting Chen
24 Apr 2024

Towards Faithful Explanations for Text Classification with Robustness Improvement and Explanation Guided Training
Dongfang Li, Baotian Hu, Qingcai Chen, Shan He
29 Dec 2023

Decomposing Uncertainty for Large Language Models through Input Clarification Ensembling
Bairu Hou, Yujian Liu, Kaizhi Qian, Jacob Andreas, Shiyu Chang, Yang Zhang
UD, UQCV, PER
15 Nov 2023

CATfOOD: Counterfactual Augmented Training for Improving Out-of-Domain Performance and Calibration
Rachneet Sachdeva, Martin Tutek, Iryna Gurevych
OODD
14 Sep 2023

Do Models Explain Themselves? Counterfactual Simulatability of Natural Language Explanations
Yanda Chen, Ruiqi Zhong, Narutatsu Ri, Chen Zhao, He He, Jacob Steinhardt, Zhou Yu, Kathleen McKeown
LRM
17 Jul 2023

Efficient Shapley Values Estimation by Amortization for Text Classification
Chenghao Yang, Fan Yin, He He, Kai-Wei Chang, Xiaofei Ma, Bing Xiang
FAtt, VLM
31 May 2023

Getting MoRE out of Mixture of Language Model Reasoning Experts
Chenglei Si, Weijia Shi, Chen Zhao, Luke Zettlemoyer, Jordan L. Boyd-Graber
LRM
24 May 2023

Streamlining models with explanations in the learning loop
Francesco Lomuscio, P. Bajardi, Alan Perotti, E. Amparore
FAtt
15 Feb 2023

Contrastive Novelty-Augmented Learning: Anticipating Outliers with Large Language Models
Albert Xu, Xiang Ren, Robin Jia
OODD
28 Nov 2022

Calibration Meets Explanation: A Simple and Effective Approach for Model Confidence Estimates
Dongfang Li, Baotian Hu, Qingcai Chen
06 Nov 2022

A Close Look into the Calibration of Pre-trained Language Models
Yangyi Chen, Lifan Yuan, Ganqu Cui, Zhiyuan Liu, Heng Ji
31 Oct 2022

Prompting GPT-3 To Be Reliable
Chenglei Si, Zhe Gan, Zhengyuan Yang, Shuohang Wang, Jianfeng Wang, Jordan L. Boyd-Graber, Lijuan Wang
KELM, LRM
17 Oct 2022

Assessing Out-of-Domain Language Model Performance from Few Examples
Prasann Singhal, Jarad Forristal, Xi Ye, Greg Durrett
LRM
13 Oct 2022

ER-Test: Evaluating Explanation Regularization Methods for Language Models
Brihi Joshi, Aaron Chan, Ziyi Liu, Shaoliang Nie, Maziar Sanjabi, Hamed Firooz, Xiang Ren
AAML
25 May 2022

Re-Examining Calibration: The Case of Question Answering
Chenglei Si, Chen Zhao, Sewon Min, Jordan L. Boyd-Graber
25 May 2022

The Unreliability of Explanations in Few-shot Prompting for Textual Reasoning
Xi Ye, Greg Durrett
ReLM, LRM
06 May 2022

Calibration of Pre-trained Transformers
Shrey Desai, Greg Durrett
UQLM
17 Mar 2020

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
ELM
20 Apr 2018

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
XAI, FaML
28 Feb 2017