Integration of Explainable AI Techniques with Large Language Models for Enhanced Interpretability for Sentiment Analysis

Abstract

Interpretability remains a key challenge in sentiment analysis with Large Language Models (LLMs), particularly in high-stakes applications where it is crucial to understand the rationale behind predictions. This research addresses the challenge by introducing a technique that applies SHAP (Shapley Additive Explanations) to individual LLM components, such as the embedding layer, encoder, decoder, and attention layer, to provide a layer-by-layer understanding of sentiment prediction. By decomposing LLMs into these parts, the approach offers a clearer view of how the model interprets and categorises sentiment. The method is evaluated on the Stanford Sentiment Treebank (SST-2) dataset, showing how different sentences affect different layers. Experimental evaluations demonstrate the effectiveness of layer-wise SHAP analysis in clarifying sentiment-specific token attributions, providing a notable improvement over existing whole-model explainability techniques. These results highlight how the proposed approach could improve the reliability and transparency of LLM-based sentiment analysis in critical applications.
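
As an illustration of the kind of token-level SHAP attribution the paper builds on, the sketch below applies the shap library to a Hugging Face sentiment pipeline fine-tuned on SST-2. This is a minimal whole-model example, not the authors' layer-wise decomposition; the model checkpoint, the example sentence, and the use of shap.Explainer with a transformers pipeline are assumptions about one reasonable setup, and the per-layer analysis described in the abstract would require additional hooks into the model's internal components.

# Minimal sketch: whole-model SHAP token attributions for an SST-2 sentiment
# classifier. This illustrates the baseline that the paper's layer-wise
# approach refines; it is not the authors' implementation.
import shap
from transformers import pipeline

# Assumed model: a public DistilBERT checkpoint fine-tuned on SST-2.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
    return_all_scores=True,  # return scores for both POSITIVE and NEGATIVE
)

# shap.Explainer accepts a transformers text pipeline directly and uses a
# text masker to estimate each token's contribution to the predicted class.
explainer = shap.Explainer(classifier)

# Example SST-2-style sentence (illustrative, not from the paper).
shap_values = explainer(["A gripping story undermined by a flat, predictable ending."])

# Visualise per-token attributions for the first sentence.
shap.plots.text(shap_values[0])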

@article{thogesan2025_2503.11948,
  title={Integration of Explainable AI Techniques with Large Language Models for Enhanced Interpretability for Sentiment Analysis},
  author={Thivya Thogesan and Anupiya Nugaliyadde and Kok Wai Wong},
  journal={arXiv preprint arXiv:2503.11948},
  year={2025}
}