Probabilistic Stability Guarantees for Feature Attributions

Stability guarantees have emerged as a principled way to evaluate feature attributions, but existing certification methods rely on heavily smoothed classifiers and often produce conservative guarantees. To address these limitations, we introduce soft stability and propose a simple, model-agnostic, sample-efficient stability certification algorithm (SCA) that yields non-trivial and interpretable guarantees for any attribution method. Moreover, we show that mild smoothing achieves a more favorable trade-off between accuracy and stability, avoiding the aggressive compromises made in prior certification methods. To explain this behavior, we use Boolean function analysis to derive a novel characterization of stability under smoothing. We evaluate SCA on vision and language tasks and demonstrate the effectiveness of soft stability in measuring the robustness of explanation methods.
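The abstract does not spell out how SCA works, but a sample-efficient probabilistic certificate of this kind can be illustrated with a generic Monte-Carlo sketch: sample perturbations of the attribution mask, measure how often the prediction is preserved, and report a lower confidence bound that holds with high probability. Everything below is an assumption for illustration, not the paper's actual algorithm: the `predict` interface, masking by zeroing features, the "reveal up to `radius` extra features" perturbation model, and the Hoeffding bound are all hypothetical choices.

```python
import math
import random

def certify_stability(predict, x, mask, n_samples=1000, radius=1, delta=0.05, seed=0):
    """Monte-Carlo stability estimate with a Hoeffding lower confidence bound.

    Hypothetical interface (not from the paper): `predict` maps a feature
    vector to a class label, `x` is the input, and `mask` is the set of
    feature indices kept by the attribution; hidden features are zeroed.
    "Stable" here means the prediction is unchanged when up to `radius`
    additional features are revealed at random.
    """
    rng = random.Random(seed)
    base = predict([xi if i in mask else 0.0 for i, xi in enumerate(x)])
    hidden = [i for i in range(len(x)) if i not in mask]
    agree = 0
    for _ in range(n_samples):
        # Reveal up to `radius` extra features chosen uniformly at random.
        extra = rng.sample(hidden, min(radius, len(hidden)))
        kept = set(mask) | set(extra)
        x_pert = [xi if i in kept else 0.0 for i, xi in enumerate(x)]
        agree += predict(x_pert) == base
    p_hat = agree / n_samples
    # One-sided Hoeffding bound: holds with probability at least 1 - delta.
    return max(0.0, p_hat - math.sqrt(math.log(1.0 / delta) / (2 * n_samples)))
```

The returned value is a probabilistic guarantee in the spirit of soft stability: a lower bound on the chance that the explanation's prediction survives the sampled perturbations, valid with probability at least 1 - delta over the sampling, and requiring only black-box access to the model.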
@article{jin2025_2504.13787,
  title={Probabilistic Stability Guarantees for Feature Attributions},
  author={Helen Jin and Anton Xue and Weiqiu You and Surbhi Goel and Eric Wong},
  journal={arXiv preprint arXiv:2504.13787},
  year={2025}
}