
arXiv:2202.08602

Fingerprinting Deep Neural Networks Globally via Universal Adversarial Perturbations

17 February 2022
Zirui Peng
Shaofeng Li
Guoxing Chen
Cheng Zhang
Haojin Zhu
Minhui Xue
Abstract

In this paper, we propose a novel and practical mechanism that enables a service provider to verify whether a suspect model was stolen from a victim model via model extraction attacks. Our key insight is that the profile of a DNN model's decision boundary can be uniquely characterized by its Universal Adversarial Perturbations (UAPs). UAPs lie in a low-dimensional subspace, and the subspaces of piracy models are more consistent with the victim model's subspace than those of non-piracy models. Based on this, we propose a UAP fingerprinting method for DNN models and train an encoder via contrastive learning that takes fingerprints as input and outputs a similarity score. Extensive studies show that our framework can detect model IP breaches with confidence >99.99% using only 20 fingerprints of the suspect model. It generalizes well across different model architectures and is robust against post-modifications of stolen models.
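The subspace-consistency intuition above can be illustrated with a minimal synthetic sketch: approximate each model's UAP subspace by the top principal directions of its perturbation vectors, then score similarity by the cosines of the principal angles between subspaces. The function names, the synthetic data, and the choice of PCA/SVD here are illustrative assumptions, not the paper's actual fingerprinting encoder.

```python
import numpy as np

def principal_subspace(perturbations, k=3):
    """Orthonormal basis of the top-k principal directions of a stack of
    flattened perturbation vectors (rows)."""
    X = perturbations - perturbations.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[:k].T  # shape (dim, k)

def subspace_similarity(basis_a, basis_b):
    """Mean cosine of the principal angles between two subspaces.
    1.0 means identical subspaces; smaller means less overlap."""
    s = np.linalg.svd(basis_a.T @ basis_b, compute_uv=False)
    return float(s.mean())

rng = np.random.default_rng(0)
# Hypothetical setup: victim and piracy models share the same 3 dominant
# UAP directions (plus noise); an unrelated model does not.
shared = rng.normal(size=(3, 64))
victim = rng.normal(size=(20, 3)) @ shared + 0.05 * rng.normal(size=(20, 64))
piracy = rng.normal(size=(20, 3)) @ shared + 0.05 * rng.normal(size=(20, 64))
unrelated = rng.normal(size=(20, 64))

U_v = principal_subspace(victim)
sim_piracy = subspace_similarity(U_v, principal_subspace(piracy))
sim_unrelated = subspace_similarity(U_v, principal_subspace(unrelated))
# The piracy model's subspace aligns far better with the victim's
print(sim_piracy, sim_unrelated)
```

In the paper's framework, this raw subspace comparison is replaced by a learned contrastive encoder that maps fingerprints to a similarity score, but the geometric signal being exploited is the same.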
