ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Knockoff Nets: Stealing Functionality of Black-Box Models
Tribhuvanesh Orekondy, Bernt Schiele, Mario Fritz
6 December 2018 · arXiv:1812.02766 · MLAU

Papers citing "Knockoff Nets: Stealing Functionality of Black-Box Models"

50 / 107 papers shown

I Know What You Trained Last Summer: A Survey on Stealing Machine Learning Models and Defences
Daryna Oliynyk, Rudolf Mayer, Andreas Rauber · 16 Jun 2022 · 45 / 106 / 0

On the Difficulty of Defending Self-Supervised Learning against Model Extraction
Adam Dziedzic, Nikita Dhawan, Muhammad Ahmad Kaleem, Jonas Guan, Nicolas Papernot · MIACV · 16 May 2022 · 54 / 22 / 0

One Picture is Worth a Thousand Words: A New Wallet Recovery Process
H. Chabanne, Vincent Despiegel, Linda Guiga · 05 May 2022 · 22 / 0 / 0

Stealing and Evading Malware Classifiers and Antivirus at Low False Positive Conditions
M. Rigaki, Sebastian Garcia · AAML · 13 Apr 2022 · 25 / 10 / 0

MEGA: Model Stealing via Collaborative Generator-Substitute Networks
Chi Hong, Jiyue Huang, L. Chen · 31 Jan 2022 · 27 / 2 / 0

SSLGuard: A Watermarking Scheme for Self-supervised Learning Pre-trained Encoders
Tianshuo Cong, Xinlei He, Yang Zhang · 27 Jan 2022 · 21 / 53 / 0

StolenEncoder: Stealing Pre-trained Encoders in Self-supervised Learning
Yupei Liu, Jinyuan Jia, Hongbin Liu, Neil Zhenqiang Gong · MIACV · 15 Jan 2022 · 16 / 25 / 0

Security for Machine Learning-based Software Systems: a survey of threats, practices and challenges
Huaming Chen, Muhammad Ali Babar · AAML · 12 Jan 2022 · 37 / 21 / 0

Model Stealing Attacks Against Inductive Graph Neural Networks
Yun Shen, Xinlei He, Yufei Han, Yang Zhang · 15 Dec 2021 · 19 / 60 / 0

Defending against Model Stealing via Verifying Embedded External Features
Yiming Li, Linghui Zhu, Xiaojun Jia, Yong Jiang, Shutao Xia, Xiaochun Cao · AAML · 07 Dec 2021 · 43 / 61 / 0

The Augmented Image Prior: Distilling 1000 Classes by Extrapolating from a Single Image
Yuki M. Asano, Aaqib Saeed · 01 Dec 2021 · 43 / 7 / 0

Mitigating Adversarial Attacks by Distributing Different Copies to Different Users
Jiyi Zhang, Hansheng Fang, W. Tann, Ke Xu, Chengfang Fang, E. Chang · AAML · 30 Nov 2021 · 26 / 3 / 0

Property Inference Attacks Against GANs
Junhao Zhou, Yufei Chen, Chao Shen, Yang Zhang · AAML, MIACV · 15 Nov 2021 · 30 / 52 / 0

Explainable AI (XAI): A Systematic Meta-Survey of Current Challenges and Future Opportunities
Waddah Saeed, C. Omlin · XAI · 11 Nov 2021 · 36 / 414 / 0

DeepSteal: Advanced Model Extractions Leveraging Efficient Weight Stealing in Memories
Adnan Siraj Rakin, Md Hafizul Islam Chowdhuryy, Fan Yao, Deliang Fan · AAML, MIACV · 08 Nov 2021 · 42 / 110 / 0

Get a Model! Model Hijacking Attack Against Machine Learning Models
A. Salem, Michael Backes, Yang Zhang · AAML · 08 Nov 2021 · 20 / 28 / 0

First to Possess His Statistics: Data-Free Model Extraction Attack on Tabular Data
Masataka Tasumi, Kazuki Iwahana, Naoto Yanai, Katsunari Shishido, Toshiya Shimizu, Yuji Higuchi, I. Morikawa, Jun Yajima · AAML · 30 Sep 2021 · 28 / 4 / 0

Student Surpasses Teacher: Imitation Attack for Black-Box NLP APIs
Qiongkai Xu, Xuanli He, Lingjuan Lyu, Lizhen Qu, Gholamreza Haffari · MLAU · 29 Aug 2021 · 40 / 22 / 0

SoK: How Robust is Image Classification Deep Neural Network Watermarking? (Extended Version)
Nils Lukas, Edward Jiang, Xinda Li, Florian Kerschbaum · AAML · 11 Aug 2021 · 36 / 87 / 0

Exploring Structure Consistency for Deep Model Watermarking
Jie Zhang, Dongdong Chen, Jing Liao, Han Fang, Zehua Ma, Weiming Zhang, G. Hua, Nenghai Yu · AAML · 05 Aug 2021 · 24 / 4 / 0

Survey: Leakage and Privacy at Inference Time
Marija Jegorova, Chaitanya Kaul, Charlie Mayor, Alison Q. O'Neil, Alexander Weir, Roderick Murray-Smith, Sotirios A. Tsaftaris · PILM, MIACV · 04 Jul 2021 · 23 / 71 / 0

HODA: Hardness-Oriented Detection of Model Extraction Attacks
A. M. Sadeghzadeh, Amir Mohammad Sobhanian, F. Dehghan, R. Jalili · MIACV · 21 Jun 2021 · 25 / 7 / 0

Evaluating the Robustness of Trigger Set-Based Watermarks Embedded in Deep Neural Networks
Suyoung Lee, Wonho Song, Suman Jana, M. Cha, Sooel Son · AAML · 18 Jun 2021 · 19 / 13 / 0

Black-Box Dissector: Towards Erasing-based Hard-Label Model Stealing Attack
Yixu Wang, Jie Li, Hong Liu, Yan Wang, Yongjian Wu, Feiyue Huang, Rongrong Ji · AAML · 03 May 2021 · 25 / 34 / 0

Delving into Data: Effectively Substitute Training for Black-box Attack
Wenxuan Wang, Bangjie Yin, Taiping Yao, Li Zhang, Yanwei Fu, Shouhong Ding, Jilin Li, Feiyue Huang, Xiangyang Xue · AAML · 26 Apr 2021 · 60 / 63 / 0

Model Extraction and Adversarial Transferability, Your BERT is Vulnerable!
Xuanli He, Lingjuan Lyu, Qiongkai Xu, Lichao Sun · MIACV, SILM · 18 Mar 2021 · 33 / 90 / 0

Proof-of-Learning: Definitions and Practice
Hengrui Jia, Mohammad Yaghini, Christopher A. Choquette-Choo, Natalie Dullerud, Anvith Thudi, Varun Chandrasekaran, Nicolas Papernot · AAML · 09 Mar 2021 · 25 / 99 / 0

Quantifying and Mitigating Privacy Risks of Contrastive Learning
Xinlei He, Yang Zhang · 08 Feb 2021 · 21 / 51 / 0

ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models
Yugeng Liu, Rui Wen, Xinlei He, A. Salem, Zhikun Zhang, Michael Backes, Emiliano De Cristofaro, Mario Fritz, Yang Zhang · AAML · 04 Feb 2021 · 17 / 125 / 0

Copycat CNN: Are Random Non-Labeled Data Enough to Steal Knowledge from Black-box Models?
Jacson Rodrigues Correia-Silva, Rodrigo Berriel, C. Badue, Alberto F. de Souza, Thiago Oliveira-Santos · MLAU · 21 Jan 2021 · 13 / 14 / 0

Developing Future Human-Centered Smart Cities: Critical Analysis of Smart City Security, Interpretability, and Ethical Challenges
Kashif Ahmad, Majdi Maabreh, M. Ghaly, Khalil Khan, Junaid Qadir, Ala I. Al-Fuqaha · 14 Dec 2020 · 27 / 142 / 0

Data-Free Model Extraction
Jean-Baptiste Truong, Pratyush Maini, R. Walls, Nicolas Papernot · MIACV · 30 Nov 2020 · 15 / 181 / 0

Model Extraction Attacks on Graph Neural Networks: Taxonomy and Realization
Bang Wu, Xiangwen Yang, Shirui Pan, Xingliang Yuan · MIACV, MLAU · 24 Oct 2020 · 55 / 53 / 0

Black-Box Ripper: Copying black-box models using generative evolutionary algorithms
Antonio Bărbălău, Adrian Cosma, Radu Tudor Ionescu, Marius Popescu · MIACV, MLAU · 21 Oct 2020 · 30 / 43 / 0

Amnesiac Machine Learning
Laura Graves, Vineel Nagisetty, Vijay Ganesh · MU, MIACV · 21 Oct 2020 · 21 / 245 / 0

Knowledge-Enriched Distributional Model Inversion Attacks
Si-An Chen, Mostafa Kahla, R. Jia, Guo-Jun Qi · 08 Oct 2020 · 24 / 93 / 0

Adversarial Watermarking Transformer: Towards Tracing Text Provenance with Data Hiding
Sahar Abdelnabi, Mario Fritz · WaLM · 07 Sep 2020 · 28 / 89 / 0

Model extraction from counterfactual explanations
Ulrich Aïvodji, Alexandre Bolot, Sébastien Gambs · MIACV, MLAU · 03 Sep 2020 · 33 / 51 / 0

Simulating Unknown Target Models for Query-Efficient Black-box Attacks
Chen Ma, L. Chen, Junhai Yong · MLAU, OOD · 02 Sep 2020 · 41 / 17 / 0

A Survey of Privacy Attacks in Machine Learning
M. Rigaki, Sebastian Garcia · PILM, AAML · 15 Jul 2020 · 39 / 213 / 0

Hermes Attack: Steal DNN Models with Lossless Inference Accuracy
Yuankun Zhu, Yueqiang Cheng, Husheng Zhou, Yantao Lu · MIACV, AAML · 23 Jun 2020 · 33 / 99 / 0

Stealing Deep Reinforcement Learning Models for Fun and Profit
Kangjie Chen, Shangwei Guo, Tianwei Zhang, Xiaofei Xie, Yang Liu · MLAU, MIACV, OffRL · 09 Jun 2020 · 24 / 45 / 0

An Overview of Privacy in Machine Learning
Emiliano De Cristofaro · SILM · 18 May 2020 · 25 / 83 / 0

Perturbing Inputs to Prevent Model Stealing
J. Grana · AAML, SILM · 12 May 2020 · 24 / 5 / 0

MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation
Sanjay Kariyappa, A. Prakash, Moinuddin K. Qureshi · AAML · 06 May 2020 · 29 / 146 / 0

Imitation Attacks and Defenses for Black-box Machine Translation Systems
Eric Wallace, Mitchell Stern, D. Song · AAML · 30 Apr 2020 · 19 / 119 / 0

DaST: Data-free Substitute Training for Adversarial Attacks
Mingyi Zhou, Jing Wu, Yipeng Liu, Shuaicheng Liu, Ce Zhu · 28 Mar 2020 · 22 / 142 / 0

Dynamic Backdoor Attacks Against Machine Learning Models
A. Salem, Rui Wen, Michael Backes, Shiqing Ma, Yang Zhang · AAML · 07 Mar 2020 · 39 / 270 / 0

Mind Your Weight(s): A Large-scale Study on Insufficient Machine Learning Model Protection in Mobile Apps
Zhichuang Sun, Ruimin Sun, Long Lu, Alan Mislove · 18 Feb 2020 · 36 / 78 / 0

Segmentations-Leak: Membership Inference Attacks and Defenses in Semantic Image Segmentation
Yang He, Shadi Rahimian, Bernt Schiele, Mario Fritz · MIACV · 20 Dec 2019 · 21 / 49 / 0