arXiv: 1906.10908 · Cited By
Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks

26 June 2019
Tribhuvanesh Orekondy, Bernt Schiele, Mario Fritz
AAML

Papers citing "Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks"

35 papers shown
  • Attackers Can Do Better: Over- and Understated Factors of Model Stealing Attacks [AAML]
    Daryna Oliynyk, Rudolf Mayer, Andreas Rauber (08 Mar 2025)
  • Neural Honeytrace: A Robust Plug-and-Play Watermarking Framework against Model Extraction Attacks [MIACV, AAML]
    Yixiao Xu, Binxing Fang, Rui Wang, Yinghai Zhou, S. Ji, Yuan Liu, Mohan Li, Zhihong Tian (20 Jan 2025)
  • ModelShield: Adaptive and Robust Watermark against Model Extraction Attack [AAML, WaLM]
    Kaiyi Pang, Tao Qi, Chuhan Wu, Minhao Bai, Minghu Jiang, Yongfeng Huang (03 May 2024)
  • Trustworthy Distributed AI Systems: Robustness, Privacy, and Governance
    Wenqi Wei, Ling Liu (02 Feb 2024)
  • Model Stealing Attack against Recommender System [AAML]
    Zhihao Zhu, Rui Fan, Chenwang Wu, Yi Yang, Defu Lian, Enhong Chen (18 Dec 2023)
  • Isolation and Induction: Training Robust Deep Neural Networks against Model Stealing Attacks [AAML]
    Jun Guo, Aishan Liu, Xingyu Zheng, Siyuan Liang, Yisong Xiao, Yichao Wu, Xianglong Liu (02 Aug 2023)
  • Pareto-Secure Machine Learning (PSML): Fingerprinting and Securing Inference Serving Systems
    Debopam Sanyal, Jui-Tse Hung, Manavi Agrawal, Prahlad Jasti, Shahab Nikkhoo, S. Jha, Tianhao Wang, Sibin Mohan, Alexey Tumanov (03 Jul 2023)
  • The False Promise of Imitating Proprietary LLMs [ALM]
    Arnav Gudibande, Eric Wallace, Charles Burton Snell, Xinyang Geng, Hao Liu, Pieter Abbeel, Sergey Levine, Dawn Song (25 May 2023)
  • Robust and IP-Protecting Vertical Federated Learning against Unexpected Quitting of Parties [FedML]
    Jingwei Sun, Zhixu Du, Anna Dai, Saleh Baghersalimi, Alireza Amirshahi, David Atienza, Yiran Chen (28 Mar 2023)
  • A Plot is Worth a Thousand Words: Model Information Stealing Attacks via Scientific Plots [AAML]
    Boyang Zhang, Xinlei He, Yun Shen, Tianhao Wang, Yang Zhang (23 Feb 2023)
  • HAPI: A Large-scale Longitudinal Dataset of Commercial ML API Predictions
    Lingjiao Chen, Zhihua Jin, Sabri Eyuboglu, Christopher Ré, Matei A. Zaharia, James Zou (18 Sep 2022)
  • Dataset Inference for Self-Supervised Models
    Adam Dziedzic, Haonan Duan, Muhammad Ahmad Kaleem, Nikita Dhawan, Jonas Guan, Yannis Cattan, Franziska Boenisch, Nicolas Papernot (16 Sep 2022)
  • Private, Efficient, and Accurate: Protecting Models Trained by Multi-party Learning with Differential Privacy
    Wenqiang Ruan, Ming Xu, Wenjing Fang, Li Wang, Lei Wang, Wei Han (18 Aug 2022)
  • Careful What You Wish For: on the Extraction of Adversarially Trained Models [MIACV, AAML]
    Kacem Khaled, Gabriela Nicolescu, F. Magalhães (21 Jul 2022)
  • I Know What You Trained Last Summer: A Survey on Stealing Machine Learning Models and Defences
    Daryna Oliynyk, Rudolf Mayer, Andreas Rauber (16 Jun 2022)
  • On the Difficulty of Defending Self-Supervised Learning against Model Extraction [MIACV]
    Adam Dziedzic, Nikita Dhawan, Muhammad Ahmad Kaleem, Jonas Guan, Nicolas Papernot (16 May 2022)
  • TinyMLOps: Operational Challenges for Widespread Edge AI Adoption
    Sam Leroux, Pieter Simoens, Meelis Lootus, Kartik Thakore, Akshay Sharma (21 Mar 2022)
  • A Survey on Privacy for B5G/6G: New Privacy Challenges, and Research Directions
    Chamara Sandeepa, Bartlomiej Siniarski, N. Kourtellis, Shen Wang, Madhusanka Liyanage (08 Mar 2022)
  • StolenEncoder: Stealing Pre-trained Encoders in Self-supervised Learning [MIACV]
    Yupei Liu, Jinyuan Jia, Hongbin Liu, Neil Zhenqiang Gong (15 Jan 2022)
  • Fingerprinting Multi-exit Deep Neural Network Models via Inference Time [AAML]
    Tian Dong, Han Qiu, Tianwei Zhang, Jiwei Li, Hewu Li, Jialiang Lu (07 Oct 2021)
  • SoK: Machine Learning Governance
    Varun Chandrasekaran, Hengrui Jia, Anvith Thudi, Adelin Travers, Mohammad Yaghini, Nicolas Papernot (20 Sep 2021)
  • Advances in adversarial attacks and defenses in computer vision: A survey [AAML]
    Naveed Akhtar, Ajmal Mian, Navid Kardan, M. Shah (01 Aug 2021)
  • MEGEX: Data-Free Model Extraction Attack against Gradient-Based Explainable AI [SILM, MIACV]
    T. Miura, Satoshi Hasegawa, Toshiki Shibahara (19 Jul 2021)
  • HODA: Hardness-Oriented Detection of Model Extraction Attacks [MIACV]
    A. M. Sadeghzadeh, Amir Mohammad Sobhanian, F. Dehghan, R. Jalili (21 Jun 2021)
  • Black-Box Dissector: Towards Erasing-based Hard-Label Model Stealing Attack [AAML]
    Yixu Wang, Jie Li, Hong Liu, Yan Wang, Yongjian Wu, Feiyue Huang, Rongrong Ji (03 May 2021)
  • Preventing Unauthorized Use of Proprietary Data: Poisoning for Secure Dataset Release
    Liam H. Fowl, Ping Yeh-Chiang, Micah Goldblum, Jonas Geiping, Arpit Bansal, W. Czaja, Tom Goldstein (16 Feb 2021)
  • "What's in the box?!": Deflecting Adversarial Attacks by Randomly Deploying Adversarially-Disjoint Models [AAML]
    Sahar Abdelnabi, Mario Fritz (09 Feb 2021)
  • Developing Future Human-Centered Smart Cities: Critical Analysis of Smart City Security, Interpretability, and Ethical Challenges
    Kashif Ahmad, Majdi Maabreh, M. Ghaly, Khalil Khan, Junaid Qadir, Ala I. Al-Fuqaha (14 Dec 2020)
  • A Distributed Privacy-Preserving Learning Dynamics in General Social Networks [FedML]
    Youming Tao, Shuzhen Chen, Feng Li, Dongxiao Yu, Jiguo Yu, Hao Sheng (15 Nov 2020)
  • A Systematic Review on Model Watermarking for Neural Networks [AAML]
    Franziska Boenisch (25 Sep 2020)
  • Simulating Unknown Target Models for Query-Efficient Black-box Attacks [MLAU, OOD]
    Chen Ma, L. Chen, Junhai Yong (02 Sep 2020)
  • A Survey of Privacy Attacks in Machine Learning [PILM, AAML]
    M. Rigaki, Sebastian Garcia (15 Jul 2020)
  • MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation [AAML]
    Sanjay Kariyappa, A. Prakash, Moinuddin K. Qureshi (06 May 2020)
  • Imitation Attacks and Defenses for Black-box Machine Translation Systems [AAML]
    Eric Wallace, Mitchell Stern, D. Song (30 Apr 2020)
  • Slalom: Fast, Verifiable and Private Execution of Neural Networks in Trusted Hardware [FedML]
    Florian Tramèr, Dan Boneh (08 Jun 2018)