PRADA: Protecting against DNN Model Stealing Attacks
arXiv: 1805.02628 · 7 May 2018
Mika Juuti, S. Szyller, Samuel Marchal, Nadarajah Asokan
Tags: SILM, AAML

Papers citing "PRADA: Protecting against DNN Model Stealing Attacks"
35 / 85 papers shown
- Guarding Machine Learning Hardware Against Physical Side-Channel Attacks. Anuj Dubey, Rosario Cammarota, Vikram B. Suresh, Aydin Aysu. AAML. 01 Sep 2021.
- MEGEX: Data-Free Model Extraction Attack against Gradient-Based Explainable AI. T. Miura, Satoshi Hasegawa, Toshiki Shibahara. SILM, MIACV. 19 Jul 2021.
- Survey: Leakage and Privacy at Inference Time. Marija Jegorova, Chaitanya Kaul, Charlie Mayor, Alison Q. O'Neil, Alexander Weir, Roderick Murray-Smith, Sotirios A. Tsaftaris. PILM, MIACV. 04 Jul 2021.
- The Threat of Offensive AI to Organizations. Yisroel Mirsky, Ambra Demontis, J. Kotak, Ram Shankar, Deng Gelei, Liu Yang, Xinming Zhang, Wenke Lee, Yuval Elovici, Battista Biggio. 30 Jun 2021.
- HODA: Hardness-Oriented Detection of Model Extraction Attacks. A. M. Sadeghzadeh, Amir Mohammad Sobhanian, F. Dehghan, R. Jalili. MIACV. 21 Jun 2021.
- Evaluating the Robustness of Trigger Set-Based Watermarks Embedded in Deep Neural Networks. Suyoung Lee, Wonho Song, Suman Jana, M. Cha, Sooel Son. AAML. 18 Jun 2021.
- Black-Box Dissector: Towards Erasing-based Hard-Label Model Stealing Attack. Yixu Wang, Jie Li, Hong Liu, Yan Wang, Yongjian Wu, Feiyue Huang, Rongrong Ji. AAML. 03 May 2021.
- The Duo of Artificial Intelligence and Big Data for Industry 4.0: Review of Applications, Techniques, Challenges, and Future Research Directions. Senthil Kumar Jagatheesaperumal, Mohamed Rahouti, Kashif Ahmad, Ala I. Al-Fuqaha, Mohsen Guizani. AI4CE. 06 Apr 2021.
- A survey of deep neural network watermarking techniques. Yue Li, Hongxia Wang, Mauro Barni. 16 Mar 2021.
- Learner-Private Convex Optimization. Jiaming Xu, Kuang Xu, Dana Yang. FedML. 23 Feb 2021.
- Model Extraction and Defenses on Generative Adversarial Networks. Hailong Hu, Jun Pang. SILM, MIACV. 06 Jan 2021.
- Developing Future Human-Centered Smart Cities: Critical Analysis of Smart City Security, Interpretability, and Ethical Challenges. Kashif Ahmad, Majdi Maabreh, M. Ghaly, Khalil Khan, Junaid Qadir, Ala I. Al-Fuqaha. 14 Dec 2020.
- Omni: Automated Ensemble with Unexpected Models against Adversarial Evasion Attack. Rui Shu, Tianpei Xia, Laurie A. Williams, Tim Menzies. AAML. 23 Nov 2020.
- Model Extraction Attacks on Graph Neural Networks: Taxonomy and Realization. Bang Wu, Xiangwen Yang, Shirui Pan, Xingliang Yuan. MIACV, MLAU. 24 Oct 2020.
- Black-Box Ripper: Copying black-box models using generative evolutionary algorithms. Antonio Bărbălău, Adrian Cosma, Radu Tudor Ionescu, Marius Popescu. MIACV, MLAU. 21 Oct 2020.
- Model extraction from counterfactual explanations. Ulrich Aïvodji, Alexandre Bolot, Sébastien Gambs. MIACV, MLAU. 03 Sep 2020.
- A Survey of Privacy Attacks in Machine Learning. M. Rigaki, Sebastian Garcia. PILM, AAML. 15 Jul 2020.
- BoMaNet: Boolean Masking of an Entire Neural Network. Anuj Dubey, Rosario Cammarota, Aydin Aysu. AAML. 16 Jun 2020.
- Stealing Deep Reinforcement Learning Models for Fun and Profit. Kangjie Chen, Shangwei Guo, Tianwei Zhang, Xiaofei Xie, Yang Liu. MLAU, MIACV, OffRL. 09 Jun 2020.
- Perturbing Inputs to Prevent Model Stealing. J. Grana. AAML, SILM. 12 May 2020.
- MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation. Sanjay Kariyappa, A. Prakash, Moinuddin K. Qureshi. AAML. 06 May 2020.
- Imitation Attacks and Defenses for Black-box Machine Translation Systems. Eric Wallace, Mitchell Stern, D. Song. AAML. 30 Apr 2020.
- ENSEI: Efficient Secure Inference via Frequency-Domain Homomorphic Convolution for Privacy-Preserving Visual Recognition. S. Bian, Tianchen Wang, Masayuki Hiromoto, Yiyu Shi, Takashi Sato. FedML. 11 Mar 2020.
- NASS: Optimizing Secure Inference via Neural Architecture Search. S. Bian, Weiwen Jiang, Qing Lu, Yiyu Shi, Takashi Sato. 30 Jan 2020.
- Deep Neural Network Fingerprinting by Conferrable Adversarial Examples. Nils Lukas, Yuxuan Zhang, Florian Kerschbaum. MLAU, FedML, AAML. 02 Dec 2019.
- The Threat of Adversarial Attacks on Machine Learning in Network Security -- A Survey. Olakunle Ibitoye, Rana Abou-Khamis, Mohamed el Shehaby, Ashraf Matrawy, M. O. Shafiq. AAML. 06 Nov 2019.
- RIGA: Covert and Robust White-Box Watermarking of Deep Neural Networks. Tianhao Wang, Florian Kerschbaum. AAML. 31 Oct 2019.
- IPGuard: Protecting Intellectual Property of Deep Neural Networks via Fingerprinting the Classification Boundary. Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong. 28 Oct 2019.
- Extraction of Complex DNN Models: Real Threat or Boogeyman? B. Atli, S. Szyller, Mika Juuti, Samuel Marchal, Nadarajah Asokan. MLAU, MIACV. 11 Oct 2019.
- Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks. Tribhuvanesh Orekondy, Bernt Schiele, Mario Fritz. AAML. 26 Jun 2019.
- A framework for the extraction of Deep Neural Networks by leveraging public data. Soham Pal, Yash Gupta, Aditya Shukla, Aditya Kanade, S. Shevade, V. Ganapathy. FedML, MLAU, MIACV. 22 May 2019.
- Stealing Neural Networks via Timing Side Channels. Vasisht Duddu, D. Samanta, D. V. Rao, V. Balas. AAML, MLAU, FedML. 31 Dec 2018.
- Adversarial Attacks Against Automatic Speech Recognition Systems via Psychoacoustic Hiding. Lea Schönherr, Katharina Kohls, Steffen Zeiler, Thorsten Holz, D. Kolossa. AAML. 16 Aug 2018.
- MLCapsule: Guarded Offline Deployment of Machine Learning as a Service. L. Hanzlik, Yang Zhang, Kathrin Grosse, A. Salem, Maximilian Augustin, Michael Backes, Mario Fritz. OffRL. 01 Aug 2018.
- Adversarial examples in the physical world. Alexey Kurakin, Ian Goodfellow, Samy Bengio. SILM, AAML. 08 Jul 2016.