Stealing Machine Learning Models via Prediction APIs
arXiv:1609.02943 · 9 September 2016
Florian Tramèr, Fan Zhang, Ari Juels, Michael K. Reiter, Thomas Ristenpart
Topics: SILM, MLAU
Papers citing "Stealing Machine Learning Models via Prediction APIs"
50 / 351 papers shown
PrivFL: Practical Privacy-preserving Federated Regressions on High-dimensional Data over Mobile Networks
K. Mandal, G. Gong · FedML · 05 Apr 2020

FALCON: Honest-Majority Maliciously Secure Framework for Private Deep Learning
Sameer Wagh, Shruti Tople, Fabrice Benhamouda, E. Kushilevitz, Prateek Mittal, T. Rabin · FedML · 05 Apr 2020

An Overview of Federated Deep Learning Privacy Attacks and Defensive Strategies
David Enthoven, Zaid Al-Ars · FedML · 01 Apr 2020

DaST: Data-free Substitute Training for Adversarial Attacks
Mingyi Zhou, Jing Wu, Yipeng Liu, Shuaicheng Liu, Ce Zhu · 28 Mar 2020

Learn to Forget: Machine Unlearning via Neuron Masking
Yang Liu, Zhuo Ma, Ximeng Liu, Jian-wei Liu, Zhongyuan Jiang, Jianfeng Ma, Philip Yu, K. Ren · MU · 24 Mar 2020

Backdooring and Poisoning Neural Networks with Image-Scaling Attacks
Erwin Quiring, Konrad Rieck · AAML · 19 Mar 2020

Dynamic Backdoor Attacks Against Machine Learning Models
A. Salem, Rui Wen, Michael Backes, Shiqing Ma, Yang Zhang · AAML · 07 Mar 2020

Entangled Watermarks as a Defense against Model Extraction
Hengrui Jia, Christopher A. Choquette-Choo, Varun Chandrasekaran, Nicolas Papernot · WaLM, AAML · 27 Feb 2020

SNIFF: Reverse Engineering of Neural Networks with Fault Attacks
J. Breier, Dirmanto Jap, Xiaolu Hou, S. Bhasin, Yang Liu · 23 Feb 2020

Mind Your Weight(s): A Large-scale Study on Insufficient Machine Learning Model Protection in Mobile Apps
Zhichuang Sun, Ruimin Sun, Long Lu, Alan Mislove · 18 Feb 2020

How to 0wn NAS in Your Spare Time
Sanghyun Hong, Michael Davinroy, Yigitcan Kaya, Dana Dachman-Soled, Tudor Dumitras · 17 Feb 2020

Adversarial Machine Learning -- Industry Perspectives
Ramnath Kumar, Magnus Nyström, J. Lambert, Andrew Marshall, Mario Goertzel, Andi Comissoneru, Matt Swann, Sharon Xia · AAML, SILM · 04 Feb 2020

On the Resilience of Biometric Authentication Systems against Random Inputs
Benjamin Zi Hao Zhao, Hassan Jameel Asghar, M. Kâafar · AAML · 13 Jan 2020

Assessing differentially private deep learning with Membership Inference
Daniel Bernau, Philip-William Grassal, J. Robl, Florian Kerschbaum · MIACV, FedML · 24 Dec 2019

Model Weight Theft With Just Noise Inputs: The Curious Case of the Petulant Attacker
Nicholas Roberts, Vinay Uday Prabhu, Matthew McAteer · 19 Dec 2019

Deep Neural Network Fingerprinting by Conferrable Adversarial Examples
Nils Lukas, Yuxuan Zhang, Florian Kerschbaum · MLAU, FedML, AAML · 02 Dec 2019

Privacy Leakage Avoidance with Switching Ensembles
R. Izmailov, Peter Lin, Chris Mesterharm, S. Basu · 18 Nov 2019

The Threat of Adversarial Attacks on Machine Learning in Network Security -- A Survey
Olakunle Ibitoye, Rana Abou-Khamis, Mohamed el Shehaby, Ashraf Matrawy, M. O. Shafiq · AAML · 06 Nov 2019

Adversarial Deep Learning for Over-the-Air Spectrum Poisoning Attacks
Y. Sagduyu, Yi Shi, T. Erpek · AAML · 01 Nov 2019

IPGuard: Protecting Intellectual Property of Deep Neural Networks via Fingerprinting the Classification Boundary
Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong · 28 Oct 2019

Secure Evaluation of Quantized Neural Networks
Anders Dalskov, Daniel E. Escudero, Marcel Keller · 28 Oct 2019

Extraction of Complex DNN Models: Real Threat or Boogeyman?
B. Atli, S. Szyller, Mika Juuti, Samuel Marchal, Nadarajah Asokan · MLAU, MIACV · 11 Oct 2019

Representation Learning for Electronic Health Records
W. Weng, Peter Szolovits · 19 Sep 2019

VeriML: Enabling Integrity Assurances and Fair Payments for Machine Learning as a Service
Lingchen Zhao, Qian Wang, Cong Wang, Qi Li, Chao Shen, Xiaodong Lin, Bo Feng, Minxin Du · VLM · 16 Sep 2019

Detection of Backdoors in Trained Classifiers Without Access to the Training Set
Zhen Xiang, David J. Miller, G. Kesidis · AAML · 27 Aug 2019

Imperio: Robust Over-the-Air Adversarial Examples for Automatic Speech Recognition Systems
Lea Schönherr, Thorsten Eisenhofer, Steffen Zeiler, Thorsten Holz, D. Kolossa · AAML · 05 Aug 2019

Constrained Concealment Attacks against Reconstruction-based Anomaly Detectors in Industrial Control Systems
Alessandro Erba, Riccardo Taormina, S. Galelli, Marcello Pogliani, Michele Carminati, S. Zanero, Nils Ole Tippenhauer · AAML · 17 Jul 2019

Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks
Tribhuvanesh Orekondy, Bernt Schiele, Mario Fritz · AAML · 26 Jun 2019

Adversarial Examples to Fool Iris Recognition Systems
Sobhan Soleymani, Ali Dabouei, J. Dawson, Nasser M. Nasrabadi · GAN, AAML · 21 Jun 2019

Membership Privacy for Machine Learning Models Through Knowledge Transfer
Virat Shejwalkar, Amir Houmansadr · 15 Jun 2019

Robust or Private? Adversarial Training Makes Models More Vulnerable to Privacy Attacks
Felipe A. Mejia, Paul Gamble, Z. Hampel-Arias, M. Lomnitz, Nina Lopatina, Lucas Tindall, M. Barrios · SILM · 15 Jun 2019

Adversarial Exploitation of Policy Imitation
Vahid Behzadan, W. Hsu · 03 Jun 2019

Terminal Brain Damage: Exposing the Graceless Degradation in Deep Neural Networks Under Hardware Fault Attacks
Sanghyun Hong, Pietro Frigo, Yigitcan Kaya, Cristiano Giuffrida, Tudor Dumitras · AAML · 03 Jun 2019

BAYHENN: Combining Bayesian Deep Learning and Homomorphic Encryption for Secure DNN Inference
Peichen Xie, Bingzhe Wu, Guangyu Sun · BDL, FedML · 03 Jun 2019

Bypassing Backdoor Detection Algorithms in Deep Learning
T. Tan, Reza Shokri · FedML, AAML · 31 May 2019

Misleading Authorship Attribution of Source Code using Adversarial Learning
Erwin Quiring, Alwin Maier, Konrad Rieck · 29 May 2019

Privacy Risks of Securing Machine Learning Models against Adversarial Examples
Liwei Song, Reza Shokri, Prateek Mittal · SILM, MIACV, AAML · 24 May 2019

Zero-shot Knowledge Transfer via Adversarial Belief Matching
P. Micaelli, Amos Storkey · 23 May 2019

A framework for the extraction of Deep Neural Networks by leveraging public data
Soham Pal, Yash Gupta, Aditya Shukla, Aditya Kanade, S. Shevade, V. Ganapathy · FedML, MLAU, MIACV · 22 May 2019

Adversarial Learning in Statistical Classification: A Comprehensive Review of Defenses Against Attacks
David J. Miller, Zhen Xiang, G. Kesidis · AAML · 12 Apr 2019

Differentially Private Model Publishing for Deep Learning
Lei Yu, Ling Liu, C. Pu, Mehmet Emre Gursoy, Stacey Truex · FedML · 03 Apr 2019

Data Poisoning against Differentially-Private Learners: Attacks and Defenses
Yuzhe Ma, Xiaojin Zhu, Justin Hsu · SILM · 23 Mar 2019

Neural Network Model Extraction Attacks in Edge Devices by Hearing Architectural Hints
Xing Hu, Ling Liang, Lei Deng, Shuangchen Li, Xinfeng Xie, Yu Ji, Yufei Ding, Chang Liu, T. Sherwood, Yuan Xie · AAML, MLAU · 10 Mar 2019

Copying Machine Learning Classifiers
Irene Unceta, Jordi Nin, O. Pujol · 05 Mar 2019

Attacking Graph-based Classification via Manipulating the Graph Structure
Binghui Wang, Neil Zhenqiang Gong · AAML · 01 Mar 2019

Evaluating Differentially Private Machine Learning in Practice
Bargav Jayaraman, David Evans · 24 Feb 2019

XONN: XNOR-based Oblivious Deep Neural Network Inference
M. Riazi, Mohammad Samragh, Hao Chen, Kim Laine, Kristin E. Lauter, F. Koushanfar · FedML, GNN, BDL · 19 Feb 2019

On the security relevance of weights in deep learning
Kathrin Grosse, T. A. Trost, Marius Mosbach, Michael Backes, Dietrich Klakow · AAML · 08 Feb 2019

Adversarial Examples Versus Cloud-based Detectors: A Black-box Empirical Study
Xurong Li, S. Ji, Men Han, Juntao Ji, Zhenyu Ren, Yushan Liu, Chunming Wu · AAML · 04 Jan 2019

Stealing Neural Networks via Timing Side Channels
Vasisht Duddu, D. Samanta, D. V. Rao, V. Balas · AAML, MLAU, FedML · 31 Dec 2018