
TriGuard: Testing Model Safety with Attribution Entropy, Verification, and Drift

Main: 7 pages
Appendix: 3 pages
Bibliography: 2 pages
Figures: 6
Tables: 6
Abstract

Deep neural networks often achieve high accuracy, but ensuring their reliability under adversarial and distributional shifts remains a pressing challenge. We propose TriGuard, a unified safety evaluation framework that combines (1) formal robustness verification, (2) attribution entropy to quantify saliency concentration, and (3) a novel Attribution Drift Score measuring explanation stability. TriGuard reveals critical mismatches between model accuracy and interpretability: verified models can still exhibit unstable reasoning, and attribution-based signals provide complementary safety insights beyond adversarial accuracy. Extensive experiments across three datasets and five architectures show how TriGuard uncovers subtle fragilities in neural reasoning. We further demonstrate that entropy-regularized training reduces explanation drift without sacrificing performance. TriGuard advances the frontier in robust, interpretable model evaluation.
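The following is a minimal illustrative sketch (not the authors' reference implementation) of the two attribution-based signals named in the abstract. It assumes gradient-based saliency maps, takes attribution entropy to be the Shannon entropy of a normalized saliency map, and takes the drift score to be the L1 distance between the normalized saliency maps of a clean input and a perturbed one; the paper's exact definitions may differ.

```python
# Sketch of attribution entropy and an attribution drift score.
# Assumptions: gradient saliency, Shannon entropy, L1 drift metric.
import torch

def saliency(model, x, target):
    """|d logit_target / d x|, summed over channels -> (H, W) saliency map."""
    x = x.clone().requires_grad_(True)
    logits = model(x.unsqueeze(0))
    logits[0, target].backward()
    return x.grad.abs().sum(dim=0)

def attribution_entropy(sal, eps=1e-12):
    """Shannon entropy of the saliency map treated as a probability distribution."""
    p = sal.flatten()
    p = p / (p.sum() + eps)
    return -(p * (p + eps).log()).sum().item()

def attribution_drift(sal_clean, sal_shifted, eps=1e-12):
    """L1 distance between normalized saliency maps (assumed drift metric)."""
    a = sal_clean.flatten(); a = a / (a.sum() + eps)
    b = sal_shifted.flatten(); b = b / (b.sum() + eps)
    return (a - b).abs().sum().item()

# Hypothetical usage: compare explanations on a clean vs. noise-perturbed input.
# model.eval()
# s_clean = saliency(model, x, y)
# s_noisy = saliency(model, x + 0.05 * torch.randn_like(x), y)
# print(attribution_entropy(s_clean), attribution_drift(s_clean, s_noisy))
```

Low entropy indicates concentrated saliency, and a small drift score indicates that the explanation is stable under the perturbation; the paper combines these with formal robustness verification.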

@article{mahato2025_2506.14217,
  title={TriGuard: Testing Model Safety with Attribution Entropy, Verification, and Drift},
  author={Dipesh Tharu Mahato and Rohan Poudel and Pramod Dhungana},
  journal={arXiv preprint arXiv:2506.14217},
  year={2025}
}