Stealing Machine Learning Models via Prediction APIs
arXiv: 1609.02943 · 9 September 2016
Florian Tramèr, Fan Zhang, Ari Juels, Michael K. Reiter, Thomas Ristenpart
Topics: SILM, MLAU
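The core idea of the cited paper, recovering a model's parameters through nothing but its prediction API, can be sketched minimally for the logistic-regression case. The victim weights and the `api` function below are made-up stand-ins for illustration; a real attacker sees only the API's probability outputs.

```python
import numpy as np

# Hypothetical victim: a logistic-regression model hidden behind a
# prediction API. (Weights are invented for this sketch; an attacker
# never observes them directly.)
W_TRUE = np.array([1.5, -2.0, 0.5])
B_TRUE = 0.3

def api(x):
    """Stand-in for the remote prediction API: returns only the
    class-1 probability for a query point x."""
    return 1.0 / (1.0 + np.exp(-(W_TRUE @ x + B_TRUE)))

# Equation-solving extraction: since logit(p) = w.x + b is linear in
# (w, b), d+1 probe queries yield a solvable linear system.
d = len(W_TRUE)
rng = np.random.default_rng(0)
probes = rng.normal(size=(d + 1, d))
logits = np.array([np.log(api(x) / (1 - api(x))) for x in probes])
A = np.hstack([probes, np.ones((d + 1, 1))])  # append bias column
w_extracted = np.linalg.solve(A, logits)       # recovered [w1, w2, w3, b]

print(np.allclose(w_extracted, np.append(W_TRUE, B_TRUE)))  # → True
```

With confidence scores exposed, d+1 queries suffice for an exact recovery of a d-dimensional logistic regression; the paper extends this idea to other model classes and to label-only APIs.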
Papers citing "Stealing Machine Learning Models via Prediction APIs" (50 of 344 shown)
MOVE: Effective and Harmless Ownership Verification via Embedded External Features
Yiming Li, Linghui Zhu, Xiaojun Jia, Yang Bai, Yong Jiang, Shutao Xia, Xiaochun Cao, Kui Ren · AAML · 04 Aug 2022

Generative Extraction of Audio Classifiers for Speaker Identification
Tejumade Afonja, Lucas Bourtoule, Varun Chandrasekaran, Sageev Oore, Nicolas Papernot · AAML · 26 Jul 2022

Careful What You Wish For: on the Extraction of Adversarially Trained Models
Kacem Khaled, Gabriela Nicolescu, F. Magalhães · MIACV, AAML · 21 Jul 2022

Machine Learning Security in Industry: A Quantitative Survey
Kathrin Grosse, L. Bieringer, Tarek R. Besold, Battista Biggio, Katharina Krombholz · 11 Jul 2022

I Know What You Trained Last Summer: A Survey on Stealing Machine Learning Models and Defences
Daryna Oliynyk, Rudolf Mayer, Andreas Rauber · 16 Jun 2022

Reconstructing Training Data from Trained Neural Networks
Niv Haim, Gal Vardi, Gilad Yehudai, Ohad Shamir, Michal Irani · 15 Jun 2022

Edge Security: Challenges and Issues
Xin Jin, Charalampos Katsis, Fan Sang, Jiahao Sun, A. Kundu, Ramana Rao Kompella · 14 Jun 2022

NeuGuard: Lightweight Neuron-Guided Defense against Membership Inference Attacks
Nuo Xu, Binghui Wang, Ran Ran, Wujie Wen, Parv Venkitasubramaniam · AAML · 11 Jun 2022

OmniXAI: A Library for Explainable AI
Wenzhuo Yang, Hung Le, Tanmay Laud, Silvio Savarese, Guosheng Lin · SyDa · 01 Jun 2022

Unfooling Perturbation-Based Post Hoc Explainers
Zachariah Carmichael, Walter J. Scheirer · AAML · 29 May 2022

Learning ReLU networks to high uniform accuracy is intractable
Julius Berner, Philipp Grohs, F. Voigtlaender · 26 May 2022

The Opportunity to Regulate Cybersecurity in the EU (and the World): Recommendations for the Cybersecurity Resilience Act
K. Ludvigsen, Shishir Nagaraja · 26 May 2022

VeriFi: Towards Verifiable Federated Unlearning
Xiangshan Gao, Xingjun Ma, Jingyi Wang, Youcheng Sun, Bo Li, S. Ji, Peng Cheng, Jiming Chen · MU · 25 May 2022

Learning to Reverse DNNs from AI Programs Automatically
Simin Chen, Hamed Khanpour, Cong Liu, Wei Yang · 20 May 2022

On the Difficulty of Defending Self-Supervised Learning against Model Extraction
Adam Dziedzic, Nikita Dhawan, Muhammad Ahmad Kaleem, Jonas Guan, Nicolas Papernot · MIACV · 16 May 2022

Impala: Low-Latency, Communication-Efficient Private Deep Learning Inference
Woojin Choi, Brandon Reagen, Gu-Yeon Wei, David Brooks · FedML · 13 May 2022

One Picture is Worth a Thousand Words: A New Wallet Recovery Process
H. Chabanne, Vincent Despiegel, Linda Guiga · 05 May 2022

Special Session: Towards an Agile Design Methodology for Efficient, Reliable, and Secure ML Systems
Shail Dave, Alberto Marchisio, Muhammad Abdullah Hanif, Amira Guesmi, Aviral Shrivastava, Ihsen Alouani, Mohamed Bennai · 18 Apr 2022

Stealing and Evading Malware Classifiers and Antivirus at Low False Positive Conditions
M. Rigaki, Sebastian Garcia · AAML · 13 Apr 2022

MixNN: A design for protecting deep learning models
Chao Liu, Hao Chen, Yusen Wu, Rui Jin · 28 Mar 2022

TinyMLOps: Operational Challenges for Widespread Edge AI Adoption
Sam Leroux, Pieter Simoens, Meelis Lootus, Kartik Thakore, Akshay Sharma · 21 Mar 2022

The Dark Side: Security Concerns in Machine Learning for EDA
Zhiyao Xie, Jingyu Pan, Chen-Chia Chang, Yiran Chen · 20 Mar 2022

Energy-Latency Attacks via Sponge Poisoning
Antonio Emanuele Cinà, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo · SILM · 14 Mar 2022

Practical No-box Adversarial Attacks with Training-free Hybrid Image Transformation
Qilong Zhang, Chaoning Zhang, Chaoqun Li, Xuanhan Wang, Jingkuan Song, Lianli Gao · AAML · 09 Mar 2022

Margin-distancing for safe model explanation
Tom Yan, Chicheng Zhang · 23 Feb 2022

Poisoning Attacks and Defenses on Artificial Intelligence: A Survey
M. A. Ramírez, Song-Kyoo Kim, H. A. Hamadi, Ernesto Damiani, Young-Ji Byon, Tae-Yeon Kim, C. Cho, C. Yeun · AAML · 21 Feb 2022

Attacks, Defenses, And Tools: A Framework To Facilitate Robust AI/ML Systems
Mohamad Fazelnia, I. Khokhlov, Mehdi Mirakhorli · AAML · 18 Feb 2022

Fingerprinting Deep Neural Networks Globally via Universal Adversarial Perturbations
Zirui Peng, Shaofeng Li, Guoxing Chen, Cheng Zhang, Haojin Zhu, Minhui Xue · AAML, FedML · 17 Feb 2022

Hardness of Noise-Free Learning for Two-Hidden-Layer Neural Networks
Sitan Chen, Aravind Gollakota, Adam R. Klivans, Raghu Meka · 10 Feb 2022

Over-the-Air Ensemble Inference with Model Privacy
Selim F. Yilmaz, Burak Hasircioglu, Deniz Gunduz · FedML · 07 Feb 2022

Training Differentially Private Models with Secure Multiparty Computation
Sikha Pentyala, Davis Railsback, Ricardo Maia, Rafael Dowsley, David Melanson, Anderson C. A. Nascimento, Martine De Cock · 05 Feb 2022

SSLGuard: A Watermarking Scheme for Self-supervised Learning Pre-trained Encoders
Tianshuo Cong, Xinlei He, Yang Zhang · 27 Jan 2022

Are Your Sensitive Attributes Private? Novel Model Inversion Attribute Inference Attacks on Classification Models
Shagufta Mehnaz, S. V. Dibbo, Ehsanul Kabir, Ninghui Li, E. Bertino · MIACV · 23 Jan 2022

MetaV: A Meta-Verifier Approach to Task-Agnostic Model Fingerprinting
Xudong Pan, Yifan Yan, Mi Zhang, Min Yang · 19 Jan 2022

StolenEncoder: Stealing Pre-trained Encoders in Self-supervised Learning
Yupei Liu, Jinyuan Jia, Hongbin Liu, Neil Zhenqiang Gong · MIACV · 15 Jan 2022

Security for Machine Learning-based Software Systems: a survey of threats, practices and challenges
Huaming Chen, Muhammad Ali Babar · AAML · 12 Jan 2022

Model Stealing Attacks Against Inductive Graph Neural Networks
Yun Shen, Xinlei He, Yufei Han, Yang Zhang · 15 Dec 2021

Defending against Model Stealing via Verifying Embedded External Features
Yiming Li, Linghui Zhu, Xiaojun Jia, Yong Jiang, Shutao Xia, Xiaochun Cao · AAML · 07 Dec 2021

Protecting Intellectual Property of Language Generation APIs with Lexical Watermark
Xuanli He, Qiongkai Xu, Lingjuan Lyu, Fangzhao Wu, Chenguang Wang · WaLM · 05 Dec 2021

Adversarial Attacks Against Deep Generative Models on Data: A Survey
Hui Sun, Tianqing Zhu, Zhiqiu Zhang, Dawei Jin, Wanlei Zhou · AAML · 01 Dec 2021

Mitigating Adversarial Attacks by Distributing Different Copies to Different Users
Jiyi Zhang, Hansheng Fang, W. Tann, Ke Xu, Chengfang Fang, E. Chang · AAML · 30 Nov 2021

TnT Attacks! Universal Naturalistic Adversarial Patches Against Deep Neural Network Systems
Bao Gia Doan, Minhui Xue, Shiqing Ma, Ehsan Abbasnejad, Damith C. Ranasinghe · AAML · 19 Nov 2021

Property Inference Attacks Against GANs
Junhao Zhou, Yufei Chen, Chao Shen, Yang Zhang · AAML, MIACV · 15 Nov 2021

Efficiently Learning Any One Hidden Layer ReLU Network From Queries
Sitan Chen, Adam R. Klivans, Raghu Meka · MLAU, MLT · 08 Nov 2021

DeepSteal: Advanced Model Extractions Leveraging Efficient Weight Stealing in Memories
Adnan Siraj Rakin, Md Hafizul Islam Chowdhuryy, Fan Yao, Deliang Fan · AAML, MIACV · 08 Nov 2021

Get a Model! Model Hijacking Attack Against Machine Learning Models
A. Salem, Michael Backes, Yang Zhang · AAML · 08 Nov 2021

Confidential Machine Learning Computation in Untrusted Environments: A Systems Security Perspective
Kha Dinh Duy, Taehyun Noh, Siwon Huh, Hojoon Lee · 05 Nov 2021

Optimizing Secure Decision Tree Inference Outsourcing
Yifeng Zheng, Cong Wang, Ruochen Wang, Huayi Duan, Surya Nepal · 31 Oct 2021

Bandwidth Utilization Side-Channel on ML Inference Accelerators
Sarbartha Banerjee, Shijia Wei, Prakash Ramrakhyani, Mohit Tiwari · 14 Oct 2021

Fingerprinting Multi-exit Deep Neural Network Models via Inference Time
Tian Dong, Han Qiu, Tianwei Zhang, Jiwei Li, Hewu Li, Jialiang Lu · AAML · 07 Oct 2021