ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

PRADA: Protecting against DNN Model Stealing Attacks
7 May 2018 · Mika Juuti, S. Szyller, Samuel Marchal, Nadarajah Asokan · SILM · AAML

Papers citing "PRADA: Protecting against DNN Model Stealing Attacks" (50 of 84 papers shown)
StyleRec: A Benchmark Dataset for Prompt Recovery in Writing Style Transformation
Shenyang Liu, Yang Gao, Shaoyan Zhai, Liqiang Wang · 40 · 0 · 0 · 06 Apr 2025

Attackers Can Do Better: Over- and Understated Factors of Model Stealing Attacks
Daryna Oliynyk, Rudolf Mayer, Andreas Rauber · AAML · 49 · 0 · 0 · 08 Mar 2025

A Survey of Model Extraction Attacks and Defenses in Distributed Computing Environments
Kaixiang Zhao, Lincan Li, Kaize Ding, Neil Zhenqiang Gong, Yue Zhao, Yushun Dong · AAML · 52 · 0 · 0 · 22 Feb 2025

SEA: Shareable and Explainable Attribution for Query-based Black-box Attacks
Yue Gao, Ilia Shumailov, Kassem Fawaz · AAML · 148 · 0 · 0 · 21 Feb 2025

Neural Honeytrace: A Robust Plug-and-Play Watermarking Framework against Model Extraction Attacks
Yixiao Xu, Binxing Fang, Rui Wang, Yinghai Zhou, S. Ji, Yuan Liu, Mohan Li, Zhihong Tian · MIACV · AAML · 73 · 0 · 0 · 20 Jan 2025

AdvQDet: Detecting Query-Based Adversarial Attacks with Adversarial Contrastive Prompt Tuning
Xin Wang, Kai-xiang Chen, Xingjun Ma, Zhineng Chen, Jingjing Chen, Yu-Gang Jiang · AAML · 48 · 4 · 0 · 04 Aug 2024

Protecting Deep Learning Model Copyrights with Adversarial Example-Free Reuse Detection
Xiaokun Luan, Xiyue Zhang, Jingyi Wang, Meng Sun · AAML · 28 · 0 · 0 · 04 Jul 2024

ModelShield: Adaptive and Robust Watermark against Model Extraction Attack
Kaiyi Pang, Tao Qi, Chuhan Wu, Minhao Bai, Minghu Jiang, Yongfeng Huang · AAML · WaLM · 72 · 2 · 0 · 03 May 2024

Trustworthy Distributed AI Systems: Robustness, Privacy, and Governance
Wenqi Wei, Ling Liu · 31 · 16 · 0 · 02 Feb 2024

The Adaptive Arms Race: Redefining Robustness in AI Security
Ilias Tsingenopoulos, Vera Rimmer, Davy Preuveneers, Fabio Pierazzi, Lorenzo Cavallaro, Wouter Joosen · AAML · 85 · 0 · 0 · 20 Dec 2023

SoK: Unintended Interactions among Machine Learning Defenses and Risks
Vasisht Duddu, S. Szyller, Nadarajah Asokan · AAML · 49 · 2 · 0 · 07 Dec 2023

Bucks for Buckets (B4B): Active Defenses Against Stealing Encoders
Jan Dubiñski, Stanislaw Pawlak, Franziska Boenisch, Tomasz Trzciñski, Adam Dziedzic · AAML · 36 · 3 · 0 · 12 Oct 2023

Towards Few-Call Model Stealing via Active Self-Paced Knowledge Distillation and Diffusion-Based Image Generation
Vlad Hondru, Radu Tudor Ionescu · DiffM · 50 · 1 · 0 · 29 Sep 2023

Pareto-Secure Machine Learning (PSML): Fingerprinting and Securing Inference Serving Systems
Debopam Sanyal, Jui-Tse Hung, Manavi Agrawal, Prahlad Jasti, Shahab Nikkhoo, S. Jha, Tianhao Wang, Sibin Mohan, Alexey Tumanov · 51 · 0 · 0 · 03 Jul 2023

The False Promise of Imitating Proprietary LLMs
Arnav Gudibande, Eric Wallace, Charles Burton Snell, Xinyang Geng, Hao Liu, Pieter Abbeel, Sergey Levine, Dawn Song · ALM · 44 · 199 · 0 · 25 May 2023

GrOVe: Ownership Verification of Graph Neural Networks using Embeddings
Asim Waheed, Vasisht Duddu, Nadarajah Asokan · 40 · 9 · 0 · 17 Apr 2023

Robust and IP-Protecting Vertical Federated Learning against Unexpected Quitting of Parties
Jingwei Sun, Zhixu Du, Anna Dai, Saleh Baghersalimi, Alireza Amirshahi, David Atienza, Yiran Chen · FedML · 21 · 8 · 0 · 28 Mar 2023

Physical Adversarial Attacks on Deep Neural Networks for Traffic Sign Recognition: A Feasibility Study
Fabian Woitschek, G. Schneider · AAML · 38 · 9 · 0 · 27 Feb 2023

A Plot is Worth a Thousand Words: Model Information Stealing Attacks via Scientific Plots
Boyang Zhang, Xinlei He, Yun Shen, Tianhao Wang, Yang Zhang · AAML · 34 · 2 · 0 · 23 Feb 2023

Digital Privacy Under Attack: Challenges and Enablers
Baobao Song, Mengyue Deng, Shiva Raj Pokhrel, Qiujun Lan, R. Doss, Gang Li · AAML · 39 · 3 · 0 · 18 Feb 2023

AUTOLYCUS: Exploiting Explainable AI (XAI) for Model Extraction Attacks against Interpretable Models
Abdullah Çaglar Öksüz, Anisa Halimi, Erman Ayday · ELM · AAML · 23 · 2 · 0 · 04 Feb 2023

Threats, Vulnerabilities, and Controls of Machine Learning Based Systems: A Survey and Taxonomy
Yusuke Kawamoto, Kazumasa Miyake, K. Konishi, Y. Oiwa · 29 · 4 · 0 · 18 Jan 2023

Learned Systems Security
R. Schuster, Jinyi Zhou, Thorsten Eisenhofer, Paul Grubbs, Nicolas Papernot · AAML · 14 · 2 · 0 · 20 Dec 2022

New data poison attacks on machine learning classifiers for mobile exfiltration
M. A. Ramírez, Sangyoung Yoon, Ernesto Damiani, H. A. Hamadi, C. Ardagna, Nicola Bena, Young-Ji Byon, Tae-Yeon Kim, C. Cho, C. Yeun · AAML · 33 · 4 · 0 · 20 Oct 2022

HAPI: A Large-scale Longitudinal Dataset of Commercial ML API Predictions
Lingjiao Chen, Zhihua Jin, Sabri Eyuboglu, Christopher Ré, Matei A. Zaharia, James Zou · 53 · 9 · 0 · 18 Sep 2022

Dataset Inference for Self-Supervised Models
Adam Dziedzic, Haonan Duan, Muhammad Ahmad Kaleem, Nikita Dhawan, Jonas Guan, Yannis Cattan, Franziska Boenisch, Nicolas Papernot · 37 · 26 · 0 · 16 Sep 2022

Customized Watermarking for Deep Neural Networks via Label Distribution Perturbation
Tzu-Yun Chien, Chih-Ya Shen · AAML · 23 · 0 · 0 · 10 Aug 2022

MOVE: Effective and Harmless Ownership Verification via Embedded External Features
Yiming Li, Linghui Zhu, Xiaojun Jia, Yang Bai, Yong Jiang, Shutao Xia, Xiaochun Cao, Kui Ren · AAML · 46 · 12 · 0 · 04 Aug 2022

Generative Extraction of Audio Classifiers for Speaker Identification
Tejumade Afonja, Lucas Bourtoule, Varun Chandrasekaran, Sageev Oore, Nicolas Papernot · AAML · 15 · 1 · 0 · 26 Jul 2022

Careful What You Wish For: on the Extraction of Adversarially Trained Models
Kacem Khaled, Gabriela Nicolescu, F. Magalhães · MIACV · AAML · 35 · 4 · 0 · 21 Jul 2022

Machine Learning Security in Industry: A Quantitative Survey
Kathrin Grosse, L. Bieringer, Tarek R. Besold, Battista Biggio, Katharina Krombholz · 40 · 32 · 0 · 11 Jul 2022

Hercules: Boosting the Performance of Privacy-preserving Federated Learning
Guowen Xu, Xingshuo Han, Shengmin Xu, Tianwei Zhang, Hongwei Li, Xinyi Huang, R. Deng · FedML · 35 · 16 · 0 · 11 Jul 2022

I Know What You Trained Last Summer: A Survey on Stealing Machine Learning Models and Defences
Daryna Oliynyk, Rudolf Mayer, Andreas Rauber · 54 · 106 · 0 · 16 Jun 2022

Integrity Authentication in Tree Models
Weijie Zhao, Yingjie Lao, Ping Li · 59 · 5 · 0 · 30 May 2022

On the Difficulty of Defending Self-Supervised Learning against Model Extraction
Adam Dziedzic, Nikita Dhawan, Muhammad Ahmad Kaleem, Jonas Guan, Nicolas Papernot · MIACV · 56 · 22 · 0 · 16 May 2022

TinyMLOps: Operational Challenges for Widespread Edge AI Adoption
Sam Leroux, Pieter Simoens, Meelis Lootus, Kartik Thakore, Akshay Sharma · 37 · 16 · 0 · 21 Mar 2022

Poisoning Attacks and Defenses on Artificial Intelligence: A Survey
M. A. Ramírez, Song-Kyoo Kim, H. A. Hamadi, Ernesto Damiani, Young-Ji Byon, Tae-Yeon Kim, C. Cho, C. Yeun · AAML · 25 · 37 · 0 · 21 Feb 2022

Fingerprinting Deep Neural Networks Globally via Universal Adversarial Perturbations
Zirui Peng, Shaofeng Li, Guoxing Chen, Cheng Zhang, Haojin Zhu, Minhui Xue · AAML · FedML · 31 · 66 · 0 · 17 Feb 2022

MEGA: Model Stealing via Collaborative Generator-Substitute Networks
Chi Hong, Jiyue Huang, L. Chen · 27 · 2 · 0 · 31 Jan 2022

MetaV: A Meta-Verifier Approach to Task-Agnostic Model Fingerprinting
Xudong Pan, Yifan Yan, Mi Zhang, Min Yang · 27 · 23 · 0 · 19 Jan 2022

StolenEncoder: Stealing Pre-trained Encoders in Self-supervised Learning
Yupei Liu, Jinyuan Jia, Hongbin Liu, Neil Zhenqiang Gong · MIACV · 16 · 24 · 0 · 15 Jan 2022

Security for Machine Learning-based Software Systems: a survey of threats, practices and challenges
Huaming Chen, Muhammad Ali Babar · AAML · 42 · 22 · 0 · 12 Jan 2022

Model Stealing Attacks Against Inductive Graph Neural Networks
Yun Shen, Xinlei He, Yufei Han, Yang Zhang · 24 · 60 · 0 · 15 Dec 2021

Defending against Model Stealing via Verifying Embedded External Features
Yiming Li, Linghui Zhu, Xiaojun Jia, Yong Jiang, Shutao Xia, Xiaochun Cao · AAML · 43 · 61 · 0 · 07 Dec 2021

Protecting Intellectual Property of Language Generation APIs with Lexical Watermark
Xuanli He, Qiongkai Xu, Lingjuan Lyu, Fangzhao Wu, Chenguang Wang · WaLM · 177 · 95 · 0 · 05 Dec 2021

ZeBRA: Precisely Destroying Neural Networks with Zero-Data Based Repeated Bit Flip Attack
Dahoon Park, K. Kwon, Sunghoon Im, Jaeha Kung · AAML · 16 · 3 · 0 · 01 Nov 2021

Generating Natural Language Adversarial Examples through An Improved Beam Search Algorithm
Tengfei Zhao, Zhaocheng Ge, Han Hu, Di Shi · AAML · 35 · 3 · 0 · 15 Oct 2021

Bugs in our Pockets: The Risks of Client-Side Scanning
H. Abelson, Ross J. Anderson, S. Bellovin, Josh Benaloh, M. Blaze, ..., Ronald L. Rivest, J. Schiller, B. Schneier, Vanessa J. Teague, Carmela Troncoso · 69 · 39 · 0 · 14 Oct 2021

First to Possess His Statistics: Data-Free Model Extraction Attack on Tabular Data
Masataka Tasumi, Kazuki Iwahana, Naoto Yanai, Katsunari Shishido, Toshiya Shimizu, Yuji Higuchi, I. Morikawa, Jun Yajima · AAML · 30 · 4 · 0 · 30 Sep 2021

SoK: Machine Learning Governance
Varun Chandrasekaran, Hengrui Jia, Anvith Thudi, Adelin Travers, Mohammad Yaghini, Nicolas Papernot · 40 · 16 · 0 · 20 Sep 2021