ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Entangled Watermarks as a Defense against Model Extraction
arXiv:2002.12200 · 27 February 2020
Hengrui Jia
Christopher A. Choquette-Choo
Varun Chandrasekaran
Nicolas Papernot
Topics: WaLM, AAML

Papers citing "Entangled Watermarks as a Defense against Model Extraction"

11 papers shown

Neural Honeytrace: A Robust Plug-and-Play Watermarking Framework against Model Extraction Attacks
Yixiao Xu, Binxing Fang, Rui Wang, Yinghai Zhou, S. Ji, Yuan Liu, Mohan Li
Topics: AAML, MIACV · 16 Jan 2025

Persistence of Backdoor-based Watermarks for Neural Networks: A Comprehensive Evaluation
Anh Tu Ngo, Chuan Song Heng, Nandish Chattopadhyay, Anupam Chattopadhyay
Topics: AAML · 06 Jan 2025

Sample Correlation for Fingerprinting Deep Face Recognition
Jiyang Guan, Jian Liang, Yanbo Wang, Ran He
Topics: AAML · 31 Dec 2024

MOVE: Effective and Harmless Ownership Verification via Embedded External Features
Yiming Li, Linghui Zhu, Xiaojun Jia, Yang Bai, Yong Jiang, Shutao Xia, Xiaochun Cao, Kui Ren
Topics: AAML · 04 Aug 2022

A framework for the extraction of Deep Neural Networks by leveraging public data
Soham Pal, Yash Gupta, Aditya Shukla, Aditya Kanade, S. Shevade, V. Ganapathy
Topics: FedML, MLAU, MIACV · 22 May 2019

Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning
Matthew Jagielski, Alina Oprea, Battista Biggio, Chang-rui Liu, Cristina Nita-Rotaru, Yue Liu
Topics: AAML · 01 Apr 2018

Adversarial Risk and the Dangers of Evaluating Against Weak Attacks
J. Uesato, Brendan O'Donoghue, Aaron van den Oord, Pushmeet Kohli
Topics: AAML · 15 Feb 2018

UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction
Leland McInnes, John Healy, James Melville
09 Feb 2018

Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms
Han Xiao, Kashif Rasul, Roland Vollgraf
25 Aug 2017

BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain
Tianyu Gu, Brendan Dolan-Gavitt, S. Garg
Topics: SILM · 22 Aug 2017

Intriguing properties of neural networks
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, D. Erhan, Ian Goodfellow, Rob Fergus
Topics: AAML · 21 Dec 2013