Counterfactual Simulatability of LLM Explanations for Generation Tasks

27 May 2025
Marvin Limpijankit
Yanda Chen
Melanie Subbiah
Nicholas Deas
Kathleen McKeown
Main: 8 pages · Bibliography: 4 pages · Appendix: 17 pages · 15 figures · 6 tables
Abstract

LLMs can be unpredictable: even slight alterations to a prompt can change the output in unexpected ways. The ability of models to accurately explain their own behavior is therefore critical, especially in high-stakes settings. One approach to evaluating explanations is counterfactual simulatability: how well an explanation allows users to infer the model's output on related counterfactual inputs. Counterfactual simulatability has previously been studied for yes/no question-answering tasks. We provide a general framework for extending this method to generation tasks, using news summarization and medical suggestion as example use cases. We find that while LLM explanations do enable users to better predict LLM outputs on counterfactuals in the summarization setting, there is significant room for improvement in the medical-suggestion setting. Furthermore, our results suggest that counterfactual simulatability evaluation may be better suited to skill-based tasks than to knowledge-based tasks.
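The core evaluation loop behind counterfactual simulatability is straightforward to sketch. Below is a minimal Python illustration, not the paper's implementation: query_model, simulate_from_explanation, and outputs_match are hypothetical stand-ins for the LLM under evaluation, the human or model simulator, and an output comparator, respectively.

# Minimal sketch of a counterfactual-simulatability score.
# Assumptions (not from the paper): `query_model` calls the LLM under
# evaluation, `simulate_from_explanation` predicts that LLM's output on a
# counterfactual input given only the explanation, and `outputs_match`
# decides whether the prediction agrees with the actual output.
from typing import Callable, List

def simulatability_precision(
    explanation: str,
    counterfactuals: List[str],
    query_model: Callable[[str], str],
    simulate_from_explanation: Callable[[str, str], str],
    outputs_match: Callable[[str, str], bool],
) -> float:
    """Fraction of counterfactual inputs on which the output inferred
    from the explanation matches the model's actual output."""
    if not counterfactuals:
        return 0.0
    hits = 0
    for cf_input in counterfactuals:
        predicted = simulate_from_explanation(explanation, cf_input)
        actual = query_model(cf_input)
        if outputs_match(predicted, actual):
            hits += 1
    return hits / len(counterfactuals)

For yes/no QA, outputs_match can be exact label agreement; for generation tasks such as summarization, exact string match does not apply, so the comparator would need to test semantic agreement (e.g., via similarity scoring or an LLM judge), which is part of what extending the method to generation involves.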

@article{limpijankit2025_2505.21740,
  title={Counterfactual Simulatability of LLM Explanations for Generation Tasks},
  author={Marvin Limpijankit and Yanda Chen and Melanie Subbiah and Nicholas Deas and Kathleen McKeown},
  journal={arXiv preprint arXiv:2505.21740},
  year={2025}
}