Large language models such as GPT, LLaMA, and Claude have become remarkably powerful at generating text, but they remain black boxes, making it hard to understand how they decide what to say. That lack of transparency can be problematic, especially in fields where trust and accountability matter. To help address this, we introduce SMILE, a new method that explains how these models respond to different parts of a prompt. SMILE is model-agnostic: it works by slightly perturbing the input, measuring how the output changes, and highlighting which words had the most impact. It then produces simple visual heat maps showing which parts of a prompt matter most. We tested SMILE on several leading LLMs and used metrics such as accuracy, consistency, stability, and fidelity to show that it gives clear and reliable explanations. By making these models easier to understand, SMILE brings us one step closer to making AI more transparent and trustworthy.
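To make the perturbation idea concrete, below is a minimal Python sketch of leave-one-out word attribution: each word is removed in turn, the model is re-queried, and the shift in the response is taken as that word's influence. This is only an illustration of the general perturbation principle described in the abstract, not the authors' SMILE implementation (which fits a statistical local explanation over many perturbations); the names query_llm, similarity, and toy_llm are placeholders introduced here for illustration.

import difflib

def similarity(a: str, b: str) -> float:
    """Crude text similarity in [0, 1]; a real setup would compare embeddings instead."""
    return difflib.SequenceMatcher(None, a, b).ratio()

def word_importance(prompt: str, query_llm) -> dict:
    """Remove one word at a time and measure how much the model's response shifts."""
    baseline = query_llm(prompt)
    words = prompt.split()
    scores = {}
    for i, word in enumerate(words):
        perturbed = " ".join(words[:i] + words[i + 1:])
        response = query_llm(perturbed)
        # A larger shift away from the baseline response means the word was more influential.
        scores[word] = 1.0 - similarity(baseline, response)
    return scores

if __name__ == "__main__":
    # Toy stand-in for a real LLM call, only so the sketch runs end to end.
    def toy_llm(prompt: str) -> str:
        return "positive" if "great" in prompt else "neutral"

    print(word_importance("the movie was great", toy_llm))

The resulting per-word scores are exactly the kind of values that can be rendered as a heat map over the prompt, as described above.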
@article{dehghani2025_2505.21657,
  title   = {Explainability of Large Language Models using SMILE: Statistical Model-agnostic Interpretability with Local Explanations},
  author  = {Zeinab Dehghani and Koorosh Aslansefat and Adil Khan and Mohammed Naveed Akram},
  journal = {arXiv preprint arXiv:2505.21657},
  year    = {2025}
}