arXiv:2004.11368
Cited By
Neural Network Laundering: Removing Black-Box Backdoor Watermarks from Deep Neural Networks
William Aiken, Hyoungshick Kim, Simon S. Woo
22 April 2020
Papers citing "Neural Network Laundering: Removing Black-Box Backdoor Watermarks from Deep Neural Networks" (22 of 22 papers shown):

1. Entangled Watermarks as a Defense against Model Extraction. Hengrui Jia, Christopher A. Choquette-Choo, Varun Chandrasekaran, Nicolas Papernot. 27 Feb 2020.
2. Extraction of Complex DNN Models: Real Threat or Boogeyman? B. Atli, S. Szyller, Mika Juuti, Samuel Marchal, Nadarajah Asokan. 11 Oct 2019.
3. On the Robustness of the Backdoor-based Watermarking in Deep Neural Networks. Masoumeh Shafieinejad, Jiaqi Wang, Nils Lukas, Xinda Li, Florian Kerschbaum. 18 Jun 2019.
4. How to Prove Your Model Belongs to You: A Blind-Watermark based Framework to Protect Intellectual Property of DNN. Zheng Li, Chengyu Hu, Yang Zhang, Shanqing Guo. 05 Mar 2019.
5. Robust Watermarking of Neural Network with Exponential Weighting. Ryota Namba, Jun Sakuma. 18 Jan 2019.
6. Have You Stolen My Model? Evasion Attacks Against Deep Neural Network Watermarking Techniques. Dorjan Hitaj, L. Mancini. 03 Sep 2018.
7. Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks. Kang Liu, Brendan Dolan-Gavitt, S. Garg. 30 May 2018.
8. Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring. Yossi Adi, Carsten Baum, Moustapha Cissé, Benny Pinkas, Joseph Keshet. 13 Feb 2018.
9. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples. Anish Athalye, Nicholas Carlini, D. Wagner. 01 Feb 2018.
10. Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning. Xinyun Chen, Chang Liu, Bo Li, Kimberly Lu, Dawn Song. 15 Dec 2017.
11. CryptoDL: Deep Neural Networks over Encrypted Data. Ehsan Hesamifard, Hassan Takabi, Mehdi Ghasemi. 14 Nov 2017.
12. Mitigating Adversarial Effects Through Randomization. Cihang Xie, Jianyu Wang, Zhishuai Zhang, Zhou Ren, Alan Yuille. 06 Nov 2017.
13. Adversarial Frontier Stitching for Remote Neural Network Watermarking. Erwan Le Merrer, P. Pérez, Gilles Trédan. 06 Nov 2017.
14. PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples. Yang Song, Taesup Kim, Sebastian Nowozin, Stefano Ermon, Nate Kushman. 30 Oct 2017.
15. BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain. Tianyu Gu, Brendan Dolan-Gavitt, S. Garg. 22 Aug 2017.
16. The Space of Transferable Adversarial Examples. Florian Tramèr, Nicolas Papernot, Ian Goodfellow, Dan Boneh, Patrick McDaniel. 11 Apr 2017.
17. EMNIST: an extension of MNIST to handwritten letters. Gregory Cohen, Saeed Afshar, J. Tapson, André van Schaik. 17 Feb 2017.
18. Embedding Watermarks into Deep Neural Networks. Yusuke Uchida, Yuki Nagai, S. Sakazawa, Shin'ichi Satoh. 15 Jan 2017.
19. A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks. Dan Hendrycks, Kevin Gimpel. 07 Oct 2016.
20. Stealing Machine Learning Models via Prediction APIs. Florian Tramèr, Fan Zhang, Ari Juels, Michael K. Reiter, Thomas Ristenpart. 09 Sep 2016.
21. Deep Residual Learning for Image Recognition. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. 10 Dec 2015.
22. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. Sergey Ioffe, Christian Szegedy. 11 Feb 2015.