
Learning to Explain: Prototype-Based Surrogate Models for LLM Classification

Abstract

Large language models (LLMs) have demonstrated impressive performance on natural language tasks, but their decision-making processes remain largely opaque. Existing explanation methods either suffer from limited faithfulness to the model's reasoning or produce explanations that humans find difficult to understand. To address these challenges, we propose ProtoSurE, a novel prototype-based surrogate framework that provides faithful and human-understandable explanations for LLMs. ProtoSurE trains an interpretable-by-design surrogate model that aligns with the target LLM while using sentence-level prototypes as human-understandable concepts. Extensive experiments show that ProtoSurE consistently outperforms state-of-the-art explanation methods across diverse LLMs and datasets. Importantly, ProtoSurE demonstrates strong data efficiency, requiring relatively few training examples to achieve good performance, which makes it practical for real-world applications.

@article{wei2025_2505.18970,
  title={Learning to Explain: Prototype-Based Surrogate Models for LLM Classification},
  author={Bowen Wei and Mehrdad Fazli and Ziwei Zhu},
  journal={arXiv preprint arXiv:2505.18970},
  year={2025}
}