Verified Relative Safety Margins for Neural Network Twins

25 September 2024
Anahita Baninajjar
Kamran Hosseini
Ahmed Rezine
A. Aminifar
arXiv:2409.16726
Abstract

Given two Deep Neural Network (DNN) classifiers with the same input and output domains, our goal is to quantify the robustness of the two networks in relation to each other. To this end, we introduce the notion of Relative Safety Margins (RSMs). Intuitively, given two classes and a common input, the RSM of one classifier with respect to another reflects the relative margins with which their decisions are made. The proposed notion is relevant in several application domains, including comparing a trained network with its corresponding compact network (e.g., a pruned, quantized, or distilled network). Not only can RSMs establish whether decisions are preserved, but they can also quantify their quality. We also propose a framework to establish safe bounds on RSM gains or losses given an input and a family of perturbations. We evaluate our approach on MNIST, CIFAR10, and two real-world medical datasets to show the relevance of our results.
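To make the intuition concrete, the sketch below computes a relative margin between two classifiers from their output scores on a single input. It is only an illustration of the idea described in the abstract: the function names, the use of raw logits as scores, and the example values are assumptions, not the paper's formal RSM definition or its verification framework (which establishes safe bounds over a family of perturbations rather than evaluating one point).

```python
import numpy as np

def safety_margin(logits: np.ndarray, c1: int, c2: int) -> float:
    """Margin with which a classifier prefers class c1 over class c2
    on one input (positive means c1 wins; larger means more robust)."""
    return float(logits[c1] - logits[c2])

def relative_safety_margin(logits_a: np.ndarray,
                           logits_b: np.ndarray,
                           c1: int, c2: int) -> float:
    """Illustrative relative margin of classifier A with respect to
    classifier B: how much margin A gains (or loses) over B for the
    same pair of classes on the same input."""
    return safety_margin(logits_a, c1, c2) - safety_margin(logits_b, c1, c2)

# Hypothetical scores for an original network and its compact twin
# (e.g., a pruned or quantized copy) on one common input.
logits_original = np.array([2.3, 0.1, -1.0])
logits_compact  = np.array([1.8, 0.4, -0.9])

rsm = relative_safety_margin(logits_compact, logits_original, c1=0, c2=1)
print(f"Relative margin (compact vs. original) = {rsm:+.2f}")
# A negative value indicates the compact twin preserves the decision
# with less margin than the original on this input.
```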
