Defending Against Machine Learning Model Stealing Attacks Using Deceptive Perturbations
Taesung Lee, Ben Edwards, Ian Molloy, D. Su
arXiv:1806.00054 · 31 May 2018 · AAML
Papers citing "Defending Against Machine Learning Model Stealing Attacks Using Deceptive Perturbations" (11 papers)
Careful What You Wish For: on the Extraction of Adversarially Trained Models
Kacem Khaled, Gabriela Nicolescu, F. Magalhães · MIACV, AAML · 21 Jul 2022

Mitigating Black-Box Adversarial Attacks via Output Noise Perturbation
Manjushree B. Aithal, Xiaohua Li · AAML · 30 Sep 2021

Proof-of-Learning: Definitions and Practice
Hengrui Jia, Mohammad Yaghini, Christopher A. Choquette-Choo, Natalie Dullerud, Anvith Thudi, Varun Chandrasekaran, Nicolas Papernot · AAML · 09 Mar 2021

Automated Discovery of Adaptive Attacks on Adversarial Defenses
Chengyuan Yao, Pavol Bielik, Petar Tsankov, Martin Vechev · AAML · 23 Feb 2021

Stealing Deep Reinforcement Learning Models for Fun and Profit
Kangjie Chen, Shangwei Guo, Tianwei Zhang, Xiaofei Xie, Yang Liu · MLAU, MIACV, OffRL · 09 Jun 2020

Perturbing Inputs to Prevent Model Stealing
J. Grana · AAML, SILM · 12 May 2020

MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation
Sanjay Kariyappa, A. Prakash, Moinuddin K. Qureshi · AAML · 06 May 2020

Imitation Attacks and Defenses for Black-box Machine Translation Systems
Eric Wallace, Mitchell Stern, D. Song · AAML · 30 Apr 2020

Entangled Watermarks as a Defense against Model Extraction
Hengrui Jia, Christopher A. Choquette-Choo, Varun Chandrasekaran, Nicolas Papernot · WaLM, AAML · 27 Feb 2020

Deep Neural Network Fingerprinting by Conferrable Adversarial Examples
Nils Lukas, Yuxuan Zhang, Florian Kerschbaum · MLAU, FedML, AAML · 02 Dec 2019

Extraction of Complex DNN Models: Real Threat or Boogeyman?
B. Atli, S. Szyller, Mika Juuti, Samuel Marchal, Nadarajah Asokan · MLAU, MIACV · 11 Oct 2019