A Comparative Study of Explainable AI Methods: Model-Agnostic vs. Model-Specific Approaches

This paper compares model-agnostic and model-specific approaches to explainable AI (XAI) in deep learning image classification. I examine how LIME and SHAP (model-agnostic methods) differ from Grad-CAM and Guided Backpropagation (model-specific methods) when interpreting ResNet50 predictions across diverse image categories. Through extensive testing on varied species, from dogs and birds to insects, I found that each method reveals different aspects of the model's decision-making process. Model-agnostic techniques provide broader feature attribution that transfers across architectures, while model-specific approaches excel at highlighting precise activation regions with greater computational efficiency. My analysis shows there is no "one-size-fits-all" solution for model interpretability. Instead, combining multiple XAI methods offers the most comprehensive understanding of complex models, which is particularly valuable in high-stakes domains such as healthcare, autonomous vehicles, and financial services, where transparency is crucial. This comparative framework provides practical guidance for selecting appropriate interpretability techniques based on specific application needs and computational constraints.
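
To make the contrast concrete, below is a minimal sketch of the model-specific side of the comparison: a hand-rolled Grad-CAM over a torchvision ResNet50, hooking the last convolutional block to weight its feature maps by their spatially averaged gradients. The target layer (model.layer4[-1]), the random stand-in input, and the helper name grad_cam are illustrative assumptions, not the paper's implementation.

import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

# Capture the last residual block's activations on the forward pass and the
# gradient of the top-class score w.r.t. those activations on the backward pass.
activations, gradients = {}, {}
layer = model.layer4[-1]  # assumed Grad-CAM target: last conv block
layer.register_forward_hook(lambda m, i, o: activations.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: gradients.update(g=go[0]))

def grad_cam(x: torch.Tensor) -> torch.Tensor:
    """Return an [H, W] heatmap in [0, 1] for the top-1 class of a 1x3xHxW input."""
    logits = model(x)
    cls = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, cls].backward()
    # Weight each feature map by its mean gradient, sum over channels, ReLU.
    w = gradients["g"].mean(dim=(2, 3), keepdim=True)       # [1, C, 1, 1]
    cam = F.relu((w * activations["a"]).sum(dim=1))         # [1, h, w]
    cam = F.interpolate(cam.unsqueeze(1), size=x.shape[-2:],
                        mode="bilinear", align_corners=False)[0, 0]
    return cam / (cam.max() + 1e-8)

heatmap = grad_cam(torch.randn(1, 3, 224, 224))  # stand-in for a real image

The model-agnostic counterpart requires no hooks or layer choice: LIME, for instance, only calls the model as a black-box prediction function on perturbed copies of the image and fits a local surrogate, which is what makes it portable across architectures but more computationally expensive.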
@article{devireddy2025_2504.04276,
  title   = {A Comparative Study of Explainable AI Methods: Model-Agnostic vs. Model-Specific Approaches},
  author  = {Keerthi Devireddy},
  journal = {arXiv preprint arXiv:2504.04276},
  year    = {2025}
}