Stealing Machine Learning Models via Prediction APIs

9 September 2016
Florian Tramèr
Fan Zhang
Ari Juels
Michael K. Reiter
Thomas Ristenpart
SILM
MLAU

Papers citing "Stealing Machine Learning Models via Prediction APIs"

50 / 344 papers shown
ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models
Yugeng Liu
Rui Wen
Xinlei He
A. Salem
Zhikun Zhang
Michael Backes
Emiliano De Cristofaro
Mario Fritz
Yang Zhang
AAML
17
125
0
04 Feb 2021
Copycat CNN: Are Random Non-Labeled Data Enough to Steal Knowledge from Black-box Models?
Jacson Rodrigues Correia-Silva
Rodrigo Berriel
C. Badue
Alberto F. de Souza
Thiago Oliveira-Santos
MLAU
21
14
0
21 Jan 2021
Robustness of on-device Models: Adversarial Attack to Deep Learning Models on Android Apps
Yujin Huang
Han Hu
Chunyang Chen
AAML
FedML
79
33
0
12 Jan 2021
Model Extraction and Defenses on Generative Adversarial Networks
Hailong Hu
Jun Pang
SILM
MIACV
33
14
0
06 Jan 2021
Practical Blind Membership Inference Attack via Differential Comparisons
Bo Hui
Yuchen Yang
Haolin Yuan
Philippe Burlina
Neil Zhenqiang Gong
Yinzhi Cao
MIACV
35
120
0
05 Jan 2021
Robust Machine Learning Systems: Challenges, Current Trends, Perspectives, and the Road Ahead
Mohamed Bennai
Mahum Naseer
T. Theocharides
C. Kyrkou
O. Mutlu
Lois Orosa
Jungwook Choi
OOD
81
100
0
04 Jan 2021
Hardware and Software Optimizations for Accelerating Deep Neural Networks: Survey of Current Trends, Challenges, and the Road Ahead
Maurizio Capra
Beatrice Bussolino
Alberto Marchisio
Guido Masera
Maurizio Martina
Mohamed Bennai
BDL
64
140
0
21 Dec 2020
Confidential Machine Learning on Untrusted Platforms: A Survey
Sagar Sharma
Keke Chen
FedML
22
15
0
15 Dec 2020
Robustness Threats of Differential Privacy
Nurislam Tursynbek
Aleksandr Petiushko
Ivan Oseledets
AAML
35
13
0
14 Dec 2020
On Lightweight Privacy-Preserving Collaborative Learning for Internet of Things by Independent Random Projections
Linshan Jiang
Rui Tan
Xin Lou
Guosheng Lin
24
12
0
11 Dec 2020
Privacy and Robustness in Federated Learning: Attacks and Defenses
Lingjuan Lyu
Han Yu
Xingjun Ma
Chen Chen
Lichao Sun
Jun Zhao
Qiang Yang
Philip S. Yu
FedML
183
357
0
07 Dec 2020
Data-Free Model Extraction
Jean-Baptiste Truong
Pratyush Maini
R. Walls
Nicolas Papernot
MIACV
15
181
0
30 Nov 2020
Omni: Automated Ensemble with Unexpected Models against Adversarial Evasion Attack
Rui Shu
Tianpei Xia
Laurie A. Williams
Tim Menzies
AAML
32
15
0
23 Nov 2020
SplitEasy: A Practical Approach for Training ML models on Mobile Devices
Kamalesh Palanisamy
Vivek Khimani
Moin Hussain Moti
Dimitris Chatzopoulos
27
20
0
09 Nov 2020
Deep-Dup: An Adversarial Weight Duplication Attack Framework to Crush Deep Neural Network in Multi-Tenant FPGA
Adnan Siraj Rakin
Yukui Luo
Xiaolin Xu
Deliang Fan
AAML
25
49
0
05 Nov 2020
Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes
Jinyuan Jia
Binghui Wang
Neil Zhenqiang Gong
AAML
35
5
0
26 Oct 2020
Model Extraction Attacks on Graph Neural Networks: Taxonomy and Realization
Bang Wu
Xiangwen Yang
Shirui Pan
Xingliang Yuan
MIACV
MLAU
55
53
0
24 Oct 2020
Black-Box Ripper: Copying black-box models using generative evolutionary algorithms
Antonio Bărbălău
Adrian Cosma
Radu Tudor Ionescu
Marius Popescu
MIACV
MLAU
30
43
0
21 Oct 2020
Amnesiac Machine Learning
Laura Graves
Vineel Nagisetty
Vijay Ganesh
MU
MIACV
27
248
0
21 Oct 2020
Adversarial Machine Learning in Image Classification: A Survey Towards the Defender's Perspective
G. R. Machado
Eugênio Silva
R. Goldschmidt
AAML
33
157
0
08 Sep 2020
Adversarial Watermarking Transformer: Towards Tracing Text Provenance with Data Hiding
Sahar Abdelnabi
Mario Fritz
WaLM
28
143
0
07 Sep 2020
Model extraction from counterfactual explanations
Ulrich Aïvodji
Alexandre Bolot
Sébastien Gambs
MIACV
MLAU
33
51
0
03 Sep 2020
Simulating Unknown Target Models for Query-Efficient Black-box Attacks
Chen Ma
L. Chen
Junhai Yong
MLAU
OOD
41
17
0
02 Sep 2020
POSEIDON: Privacy-Preserving Federated Neural Network Learning
Sinem Sav
Apostolos Pyrgelis
J. Troncoso-Pastoriza
D. Froelicher
Jean-Philippe Bossuat
João Sá Sousa
Jean-Pierre Hubaux
FedML
21
153
0
01 Sep 2020
Deep-Lock: Secure Authorization for Deep Neural Networks
Manaar Alam
Sayandeep Saha
Debdeep Mukhopadhyay
S. Kundu
14
21
0
13 Aug 2020
Membership Leakage in Label-Only Exposures
Zheng Li
Yang Zhang
34
237
0
30 Jul 2020
SOTERIA: In Search of Efficient Neural Networks for Private Inference
Anshul Aggarwal
Trevor E. Carlson
Reza Shokri
Shruti Tople
FedML
27
12
0
25 Jul 2020
Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review
Yansong Gao
Bao Gia Doan
Zhi-Li Zhang
Siqi Ma
Jiliang Zhang
Anmin Fu
Surya Nepal
Hyoungshick Kim
AAML
40
221
0
21 Jul 2020
A Survey of Privacy Attacks in Machine Learning
M. Rigaki
Sebastian Garcia
PILM
AAML
39
213
0
15 Jul 2020
SoK: The Faults in our ASRs: An Overview of Attacks against Automatic Speech Recognition and Speaker Identification Systems
H. Abdullah
Kevin Warren
Vincent Bindschaedler
Nicolas Papernot
Patrick Traynor
AAML
32
128
0
13 Jul 2020
Quality Inference in Federated Learning with Secure Aggregation
Balázs Pejó
G. Biczók
FedML
23
22
0
13 Jul 2020
The Trade-Offs of Private Prediction
Laurens van der Maaten
Awni Y. Hannun
25
22
0
09 Jul 2020
Generating Adversarial Examples with Controllable Non-transferability
Renzhi Wang
Tianwei Zhang
Xiaofei Xie
Lei Ma
Cong Tian
Felix Juefei Xu
Yang Liu
SILM
AAML
17
3
0
02 Jul 2020
Legal Risks of Adversarial Machine Learning Research
Ramnath Kumar
J. Penney
B. Schneier
Kendra Albert
AAML
ELM
SILM
22
8
0
29 Jun 2020
Hermes Attack: Steal DNN Models with Lossless Inference Accuracy
Yuankun Zhu
Yueqiang Cheng
Husheng Zhou
Yantao Lu
MIACV
AAML
39
99
0
23 Jun 2020
AdvMind: Inferring Adversary Intent of Black-Box Attacks
Ren Pang
Xinyang Zhang
S. Ji
Xiapu Luo
Ting Wang
MLAU
AAML
11
29
0
16 Jun 2020
BoMaNet: Boolean Masking of an Entire Neural Network
Anuj Dubey
Rosario Cammarota
Aydin Aysu
AAML
25
45
0
16 Jun 2020
SPEED: Secure, PrivatE, and Efficient Deep learning
Arnaud Grivet Sébert
Rafael Pinot
Martin Zuber
Cédric Gouy-Pailler
Renaud Sirdey
FedML
15
20
0
16 Jun 2020
Stealing Deep Reinforcement Learning Models for Fun and Profit
Kangjie Chen
Shangwei Guo
Tianwei Zhang
Xiaofei Xie
Yang Liu
MLAU
MIACV
OffRL
24
45
0
09 Jun 2020
Revisiting Membership Inference Under Realistic Assumptions
Bargav Jayaraman
Lingxiao Wang
Katherine Knipmeyer
Quanquan Gu
David Evans
24
147
0
21 May 2020
An Overview of Privacy in Machine Learning
Emiliano De Cristofaro
SILM
33
83
0
18 May 2020
Perturbing Inputs to Prevent Model Stealing
J. Grana
AAML
SILM
24
5
0
12 May 2020
MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation
Sanjay Kariyappa
A. Prakash
Moinuddin K. Qureshi
AAML
32
146
0
06 May 2020
When Machine Unlearning Jeopardizes Privacy
Min Chen
Zhikun Zhang
Tianhao Wang
Michael Backes
Mathias Humbert
Yang Zhang
MIACV
36
218
0
05 May 2020
Imitation Attacks and Defenses for Black-box Machine Translation Systems
Eric Wallace
Mitchell Stern
D. Song
AAML
27
120
0
30 Apr 2020
Exploiting Defenses against GAN-Based Feature Inference Attacks in Federated Learning
Xinjian Luo
Xiangqi Zhu
FedML
78
25
0
27 Apr 2020
FALCON: Honest-Majority Maliciously Secure Framework for Private Deep Learning
Sameer Wagh
Shruti Tople
Fabrice Benhamouda
E. Kushilevitz
Prateek Mittal
T. Rabin
FedML
33
295
0
05 Apr 2020
An Overview of Federated Deep Learning Privacy Attacks and Defensive Strategies
David Enthoven
Zaid Al-Ars
FedML
60
50
0
01 Apr 2020
DaST: Data-free Substitute Training for Adversarial Attacks
Mingyi Zhou
Jing Wu
Yipeng Liu
Shuaicheng Liu
Ce Zhu
25
142
0
28 Mar 2020
Learn to Forget: Machine Unlearning via Neuron Masking
Yang Liu
Zhuo Ma
Ximeng Liu
Jian-wei Liu
Zhongyuan Jiang
Jianfeng Ma
Philip Yu
K. Ren
MU
24
61
0
24 Mar 2020