DAWN: Dynamic Adversarial Watermarking of Neural Networks
S. Szyller, B. Atli, Samuel Marchal, Nadarajah Asokan
arXiv:1906.00830, 3 June 2019
Papers citing "DAWN: Dynamic Adversarial Watermarking of Neural Networks" (32 of 32 papers shown)
RADEP: A Resilient Adaptive Defense Framework Against Model Extraction Attacks. Amit Chakraborty, Sayyed Farid Ahamed, Sandip Roy, S. Banerjee, Kevin Choi, A. Rahman, Alison Hu, Edward Bowen, Sachin Shetty. 25 May 2025 (0 citations).
Neural Honeytrace: A Robust Plug-and-Play Watermarking Framework against Model Extraction Attacks. Yixiao Xu, Binxing Fang, Rui Wang, Yinghai Zhou, S. Ji, Yuan Liu, Mohan Li. 16 Jan 2025 (0 citations).
GENIE: Watermarking Graph Neural Networks for Link Prediction. Venkata Sai Pranav Bachina, Ankit Gangwal, Aaryan Ajay Sharma, Charu Sharma. 07 Jun 2024 (1 citation).
Entangled Watermarks as a Defense against Model Extraction. Hengrui Jia, Christopher A. Choquette-Choo, Varun Chandrasekaran, Nicolas Papernot. 27 Feb 2020 (219 citations).
Thieves on Sesame Street! Model Extraction of BERT-based APIs. Kalpesh Krishna, Gaurav Singh Tomar, Ankur P. Parikh, Nicolas Papernot, Mohit Iyyer. 27 Oct 2019 (200 citations).
Piracy Resistant Watermarks for Deep Neural Networks. Huiying Li, Emily Willson, Shawn Shan, Bing Ye, Shehroz S. Khan. 02 Oct 2019 (26 citations).
TABOR: A Highly Accurate Approach to Inspecting and Restoring Trojan Backdoors in AI Systems. Wenbo Guo, Lun Wang, Masashi Sugiyama, Min Du, D. Song. 02 Aug 2019 (229 citations).
Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks. Tribhuvanesh Orekondy, Bernt Schiele, Mario Fritz. 26 Jun 2019 (164 citations).
Certified Adversarial Robustness via Randomized Smoothing. Jeremy M. Cohen, Elan Rosenfeld, J. Zico Kolter. 08 Feb 2019 (2,028 citations).
Knockoff Nets: Stealing Functionality of Black-Box Models. Tribhuvanesh Orekondy, Bernt Schiele, Mario Fritz. 06 Dec 2018 (534 citations).
Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering. Bryant Chen, Wilka Carvalho, Wenjie Li, Heiko Ludwig, Benjamin Edwards, Chengyao Chen, Ziqiang Cao, Biplav Srivastava. 09 Nov 2018 (790 citations).
Copycat CNN: Stealing Knowledge by Persuading Confession with Random Non-Labeled Data. Jacson Rodrigues Correia-Silva, Rodrigo Berriel, C. Badue, Alberto F. de Souza, Thiago Oliveira-Santos. 14 Jun 2018 (175 citations).
Defending Against Machine Learning Model Stealing Attacks Using Deceptive Perturbations. Taesung Lee, Ben Edwards, Ian Molloy, D. Su. 31 May 2018 (41 citations).
Do Better ImageNet Models Transfer Better? Simon Kornblith, Jonathon Shlens, Quoc V. Le. 23 May 2018 (1,324 citations).
PRADA: Protecting against DNN Model Stealing Attacks. Mika Juuti, S. Szyller, Samuel Marchal, Nadarajah Asokan. 07 May 2018 (442 citations).
DeepMarks: A Digital Fingerprinting Framework for Deep Neural Networks. Huili Chen, B. Rouhani, F. Koushanfar. 10 Apr 2018 (61 citations).
Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring. Yossi Adi, Carsten Baum, Moustapha Cissé, Benny Pinkas, Joseph Keshet. 13 Feb 2018 (674 citations).
Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning. Xinyun Chen, Chang-rui Liu, Yue Liu, Kimberly Lu, D. Song. 15 Dec 2017 (1,833 citations).
Model Extraction Warning in MLaaS Paradigm. M. Kesarwani, B. Mukhoty, Vijay Arya, S. Mehta. 20 Nov 2017 (140 citations).
Adversarial Frontier Stitching for Remote Neural Network Watermarking. Erwan Le Merrer, P. Pérez, Gilles Trédan. 06 Nov 2017 (337 citations).
Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization. Luis Muñoz-González, Battista Biggio, Ambra Demontis, Andrea Paudice, Vasin Wongrassamee, Emil C. Lupu, Fabio Roli. 29 Aug 2017 (630 citations).
BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain. Tianyu Gu, Brendan Dolan-Gavitt, S. Garg. 22 Aug 2017 (1,770 citations).
Towards Deep Learning Models Resistant to Adversarial Attacks. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu. 19 Jun 2017 (12,029 citations).
MagNet: a Two-Pronged Defense against Adversarial Examples. Dongyu Meng, Hao Chen. 25 May 2017 (1,206 citations).
Embedding Watermarks into Deep Neural Networks. Yusuke Uchida, Yuki Nagai, S. Sakazawa, Shin'ichi Satoh. 15 Jan 2017 (606 citations).
Understanding deep learning requires rethinking generalization. Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals. 10 Nov 2016 (4,624 citations).
Stealing Machine Learning Models via Prediction APIs. Florian Tramèr, Fan Zhang, Ari Juels, Michael K. Reiter, Thomas Ristenpart. 09 Sep 2016 (1,803 citations).
Densely Connected Convolutional Networks. Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger. 25 Aug 2016 (36,708 citations).
Practical Black-Box Attacks against Machine Learning. Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, S. Jha, Z. Berkay Celik, A. Swami. 08 Feb 2016 (3,676 citations).
Deep Residual Learning for Image Recognition. Kaiming He, Xinming Zhang, Shaoqing Ren, Jian Sun. 10 Dec 2015 (193,426 citations).
Explaining and Harnessing Adversarial Examples. Ian Goodfellow, Jonathon Shlens, Christian Szegedy. 20 Dec 2014 (19,017 citations).
Poisoning Attacks against Support Vector Machines. Battista Biggio, B. Nelson, Pavel Laskov. 27 Jun 2012 (1,585 citations).