Enhancing the Interpretability of SHAP Values Using Large Language Models

Model interpretability is crucial for understanding and trusting the decisions made by complex machine learning models, such as those built with XGBoost. SHAP (SHapley Additive exPlanations) values have become a popular tool for interpreting these models by attributing each prediction to the contributions of individual features. However, the technical nature of SHAP explanations often means they are useful mainly to researchers, leaving non-technical end users struggling to understand the model's behavior. To address this challenge, we explore the use of Large Language Models (LLMs) to translate SHAP value outputs into plain-language explanations that are more accessible to non-technical audiences. By applying a pre-trained LLM, we generate explanations that preserve the accuracy of the underlying SHAP values while significantly improving their clarity and usability for end users. Our results demonstrate that LLM-enhanced SHAP explanations provide a more intuitive understanding of model predictions, thereby enhancing the overall interpretability of machine learning models. Future work will explore further customization, multimodal explanations, and user feedback mechanisms to refine and expand the approach.
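To make the pipeline concrete, the sketch below illustrates the general idea rather than the authors' exact implementation: fit an XGBoost model, compute SHAP values for one prediction with the shap library, and assemble a prompt asking an LLM to restate the feature attributions in plain language. The dataset, model settings, and the `query_llm` helper are all assumptions introduced here for illustration; substitute whichever pre-trained LLM interface is available.

```python
# Minimal sketch (assumed setup, not the paper's exact pipeline):
# XGBoost model -> SHAP values -> plain-language prompt for an LLM.
import numpy as np
import shap
import xgboost as xgb
from sklearn.datasets import load_breast_cancer

# Fit a simple classifier on a public dataset (illustrative choice only).
data = load_breast_cancer()
X, y = data.data, data.target
model = xgb.XGBClassifier(n_estimators=100, max_depth=3).fit(X, y)

# Compute SHAP values for a single prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])  # shape: (1, n_features)

# Rank features by absolute contribution and describe the top five in words.
contributions = shap_values[0]
top = np.argsort(np.abs(contributions))[::-1][:5]
feature_lines = [
    f"- {data.feature_names[i]}: SHAP value {contributions[i]:+.3f}"
    for i in top
]

# Assemble a prompt asking the LLM to translate the SHAP output
# into an explanation a non-technical reader can follow.
prompt = (
    "The following SHAP values explain one prediction of a tumor classifier. "
    "Positive values push the model toward the positive class, negative values "
    "push it away. Explain this prediction in plain language for a "
    "non-technical reader:\n" + "\n".join(feature_lines)
)

# `query_llm` is a hypothetical placeholder for any pre-trained LLM API.
# explanation = query_llm(prompt)
print(prompt)
```

In this sketch, only the top-ranked attributions are passed to the LLM, which keeps the prompt short and focuses the generated explanation on the features that actually drove the prediction.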