Defending Against Model Stealing Attacks with Adaptive Misinformation
Sanjay Kariyappa, Moinuddin K. Qureshi
arXiv:1911.07100 · 16 November 2019 · MLAU, AAML

Papers citing "Defending Against Model Stealing Attacks with Adaptive Misinformation"

19 of 19 citing papers shown.

A Survey of Model Extraction Attacks and Defenses in Distributed Computing Environments
Kaixiang Zhao, Lincan Li, Kaize Ding, Neil Zhenqiang Gong, Yue Zhao, Yushun Dong
AAML · 52 / 0 / 0 · 22 Feb 2025

Protecting Deep Learning Model Copyrights with Adversarial Example-Free Reuse Detection
Xiaokun Luan, Xiyue Zhang, Jingyi Wang, Meng Sun
AAML · 28 / 0 / 0 · 04 Jul 2024

ModelShield: Adaptive and Robust Watermark against Model Extraction Attack
Kaiyi Pang, Tao Qi, Chuhan Wu, Minhao Bai, Minghu Jiang, Yongfeng Huang
AAML, WaLM · 72 / 2 / 0 · 03 May 2024

Trustworthy Distributed AI Systems: Robustness, Privacy, and Governance
Wenqi Wei, Ling Liu
31 / 16 / 0 · 02 Feb 2024

Isolation and Induction: Training Robust Deep Neural Networks against Model Stealing Attacks
Jun Guo, Aishan Liu, Xingyu Zheng, Siyuan Liang, Yisong Xiao, Yichao Wu, Xianglong Liu
AAML · 38 / 12 / 0 · 02 Aug 2023

Pareto-Secure Machine Learning (PSML): Fingerprinting and Securing Inference Serving Systems
Debopam Sanyal, Jui-Tse Hung, Manavi Agrawal, Prahlad Jasti, Shahab Nikkhoo, S. Jha, Tianhao Wang, Sibin Mohan, Alexey Tumanov
51 / 0 / 0 · 03 Jul 2023

A Plot is Worth a Thousand Words: Model Information Stealing Attacks via Scientific Plots
Boyang Zhang, Xinlei He, Yun Shen, Tianhao Wang, Yang Zhang
AAML · 37 / 2 / 0 · 23 Feb 2023

AUTOLYCUS: Exploiting Explainable AI (XAI) for Model Extraction Attacks against Interpretable Models
Abdullah Çaglar Öksüz, Anisa Halimi, Erman Ayday
ELM, AAML · 23 / 2 / 0 · 04 Feb 2023

MOVE: Effective and Harmless Ownership Verification via Embedded External Features
Yiming Li, Linghui Zhu, Xiaojun Jia, Yang Bai, Yong Jiang, Shutao Xia, Xiaochun Cao, Kui Ren
AAML · 46 / 12 / 0 · 04 Aug 2022

Careful What You Wish For: on the Extraction of Adversarially Trained Models
Kacem Khaled, Gabriela Nicolescu, F. Magalhães
MIACV, AAML · 35 / 4 / 0 · 21 Jul 2022

I Know What You Trained Last Summer: A Survey on Stealing Machine Learning Models and Defences
Daryna Oliynyk, Rudolf Mayer, Andreas Rauber
57 / 106 / 0 · 16 Jun 2022

StolenEncoder: Stealing Pre-trained Encoders in Self-supervised Learning
Yupei Liu, Jinyuan Jia, Hongbin Liu, Neil Zhenqiang Gong
MIACV · 16 / 24 / 0 · 15 Jan 2022

Defending against Model Stealing via Verifying Embedded External Features
Yiming Li, Linghui Zhu, Xiaojun Jia, Yong Jiang, Shutao Xia, Xiaochun Cao
AAML · 43 / 61 / 0 · 07 Dec 2021

SoK: Machine Learning Governance
Varun Chandrasekaran, Hengrui Jia, Anvith Thudi, Adelin Travers, Mohammad Yaghini, Nicolas Papernot
43 / 16 / 0 · 20 Sep 2021

Advances in adversarial attacks and defenses in computer vision: A survey
Naveed Akhtar, Ajmal Mian, Navid Kardan, M. Shah
AAML · 41 / 236 / 0 · 01 Aug 2021

HODA: Hardness-Oriented Detection of Model Extraction Attacks
A. M. Sadeghzadeh, Amir Mohammad Sobhanian, F. Dehghan, R. Jalili
MIACV · 25 / 7 / 0 · 21 Jun 2021

Black-Box Dissector: Towards Erasing-based Hard-Label Model Stealing Attack
Yixu Wang, Jie Li, Hong Liu, Yan Wang, Yongjian Wu, Feiyue Huang, Rongrong Ji
AAML · 25 / 34 / 0 · 03 May 2021

MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation
Sanjay Kariyappa, A. Prakash, Moinuddin K. Qureshi
AAML · 32 / 146 / 0 · 06 May 2020

Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles
Balaji Lakshminarayanan, Alexander Pritzel, Charles Blundell
UQCV, BDL · 276 / 5,695 / 0 · 05 Dec 2016