Knockoff Nets: Stealing Functionality of Black-Box Models
arXiv: 1812.02766

6 December 2018
Tribhuvanesh Orekondy
Bernt Schiele
Mario Fritz
MLAU

Papers citing "Knockoff Nets: Stealing Functionality of Black-Box Models"

50 of 104 citing papers shown
Attackers Can Do Better: Over- and Understated Factors of Model Stealing Attacks
Daryna Oliynyk
Rudolf Mayer
Andreas Rauber
AAML
49
0
0
08 Mar 2025
Examining the Threat Landscape: Foundation Models and Model Stealing
Ankita Raj
Deepankar Varma
Chetan Arora
AAML
78
1
0
25 Feb 2025
Neural Honeytrace: A Robust Plug-and-Play Watermarking Framework against Model Extraction Attacks
Yixiao Xu
Binxing Fang
Rui Wang
Yinghai Zhou
S. Ji
Yuan Liu
Mohan Li
Zhihong Tian
MIACV
AAML
65
0
0
20 Jan 2025
Neural Interactive Proofs
Lewis Hammond
Sam Adam-Day
AAML
89
2
0
12 Dec 2024
A Cost-Aware Approach to Adversarial Robustness in Neural Networks
Charles Meyers
Mohammad Reza Saleh Sedghpour
Tommy Löfstedt
Erik Elmroth
OOD
AAML
33
0
0
11 Sep 2024
On the Weaknesses of Backdoor-based Model Watermarking: An Information-theoretic Perspective
Aoting Hu
Yanzhi Chen
Renjie Xie
Adrian Weller
38
0
0
10 Sep 2024
ModelLock: Locking Your Model With a Spell
Yifeng Gao
Yuhua Sun
Xingjun Ma
Zuxuan Wu
Yu-Gang Jiang
VLM
50
1
0
25 May 2024
A Generative Approach to Surrogate-based Black-box Attacks
Raha Moraffah
Huan Liu
AAML
27
0
0
05 Feb 2024
Stolen Subwords: Importance of Vocabularies for Machine Translation Model Stealing
Vilém Zouhar
AAML
40
0
0
29 Jan 2024
Model Stealing Attack against Recommender System
Zhihao Zhu
Rui Fan
Chenwang Wu
Yi Yang
Defu Lian
Enhong Chen
AAML
27
2
0
18 Dec 2023
SoK: Unintended Interactions among Machine Learning Defenses and Risks
Vasisht Duddu
S. Szyller
Nadarajah Asokan
AAML
47
2
0
07 Dec 2023
Beyond Boundaries: A Comprehensive Survey of Transferable Attacks on AI Systems
Guangjing Wang
Ce Zhou
Yuanda Wang
Bocheng Chen
Hanqing Guo
Qiben Yan
AAML
SILM
68
3
0
20 Nov 2023
Army of Thieves: Enhancing Black-Box Model Extraction via Ensemble based sample selection
Akshit Jindal
Vikram Goyal
Saket Anand
Chetan Arora
FedML
20
2
0
08 Nov 2023
SecurityNet: Assessing Machine Learning Vulnerabilities on Public Models
Boyang Zhang
Zheng Li
Ziqing Yang
Xinlei He
Michael Backes
Mario Fritz
Yang Zhang
33
4
0
19 Oct 2023
SCME: A Self-Contrastive Method for Data-free and Query-Limited Model Extraction Attack
Renyang Liu
Jinhong Zhang
Kwok-Yan Lam
Jun Zhao
Wei Zhou
25
1
0
15 Oct 2023
Bucks for Buckets (B4B): Active Defenses Against Stealing Encoders
Jan Dubiński
Stanislaw Pawlak
Franziska Boenisch
Tomasz Trzciński
Adam Dziedzic
AAML
29
3
0
12 Oct 2023
StegGuard: Fingerprinting Self-supervised Pre-trained Encoders via Secrets Embeder and Extractor
Xingdong Ren
Tianxing Zhang
Hanzhou Wu
Xinpeng Zhang
Yinggui Wang
Guangling Sun
LLMSV
27
0
0
05 Oct 2023
Towards Few-Call Model Stealing via Active Self-Paced Knowledge Distillation and Diffusion-Based Image Generation
Vlad Hondru
Radu Tudor Ionescu
DiffM
50
1
0
29 Sep 2023
Isolation and Induction: Training Robust Deep Neural Networks against Model Stealing Attacks
Jun Guo
Aishan Liu
Xingyu Zheng
Siyuan Liang
Yisong Xiao
Yichao Wu
Xianglong Liu
AAML
38
12
0
02 Aug 2023
Pareto-Secure Machine Learning (PSML): Fingerprinting and Securing Inference Serving Systems
Debopam Sanyal
Jui-Tse Hung
Manavi Agrawal
Prahlad Jasti
Shahab Nikkhoo
S. Jha
Tianhao Wang
Sibin Mohan
Alexey Tumanov
51
0
0
03 Jul 2023
The False Promise of Imitating Proprietary LLMs
Arnav Gudibande
Eric Wallace
Charles Burton Snell
Xinyang Geng
Hao Liu
Pieter Abbeel
Sergey Levine
Dawn Song
ALM
44
198
0
25 May 2023
Lion: Adversarial Distillation of Proprietary Large Language Models
Yuxin Jiang
Chunkit Chan
Mingyang Chen
Wei Wang
ALM
28
23
0
22 May 2023
Finding Meaningful Distributions of ML Black-boxes under Forensic Investigation
Jiyi Zhang
Hansheng Fang
Hwee Kuan Lee
E. Chang
18
1
0
10 May 2023
On the Limitations of Model Stealing with Uncertainty Quantification Models
David Pape
Sina Daubener
Thorsten Eisenhofer
Antonio Emanuele Cinà
Lea Schonherr
36
3
0
09 May 2023
GrOVe: Ownership Verification of Graph Neural Networks using Embeddings
Asim Waheed
Vasisht Duddu
Nadarajah Asokan
35
9
0
17 Apr 2023
On the Adversarial Inversion of Deep Biometric Representations
Gioacchino Tangari
Shreesh Keskar
Hassan Jameel Asghar
Dali Kaafar
AAML
34
2
0
12 Apr 2023
Robust and IP-Protecting Vertical Federated Learning against Unexpected Quitting of Parties
Jingwei Sun
Zhixu Du
Anna Dai
Saleh Baghersalimi
Alireza Amirshahi
David Atienza
Yiran Chen
FedML
16
7
0
28 Mar 2023
Model Extraction Attacks on Split Federated Learning
Jingtao Li
Adnan Siraj Rakin
Xing Chen
Li Yang
Zhezhi He
Deliang Fan
C. Chakrabarti
FedML
65
5
0
13 Mar 2023
Physical Adversarial Attacks on Deep Neural Networks for Traffic Sign Recognition: A Feasibility Study
Fabian Woitschek
G. Schneider
AAML
38
9
0
27 Feb 2023
A Plot is Worth a Thousand Words: Model Information Stealing Attacks via Scientific Plots
Boyang Zhang
Xinlei He
Yun Shen
Tianhao Wang
Yang Zhang
AAML
27
2
0
23 Feb 2023
On Function-Coupled Watermarks for Deep Neural Networks
Xiangyu Wen
Yu Li
Weizhen Jiang
Qian-Lan Xu
AAML
28
1
0
08 Feb 2023
AUTOLYCUS: Exploiting Explainable AI (XAI) for Model Extraction Attacks against Interpretable Models
Abdullah Çaglar Öksüz
Anisa Halimi
Erman Ayday
ELM
AAML
21
2
0
04 Feb 2023
IronForge: An Open, Secure, Fair, Decentralized Federated Learning
Guangsheng Yu
Xu Wang
Caijun Sun
Qin Wang
Ping Yu
Wei Ni
R. Liu
Xiwei Xu
OOD
AI4CE
29
25
0
07 Jan 2023
SoK: Let the Privacy Games Begin! A Unified Treatment of Data Inference Privacy in Machine Learning
A. Salem
Giovanni Cherubin
David E. Evans
Boris Köpf
Andrew J. Paverd
Anshuman Suri
Shruti Tople
Santiago Zanella Béguelin
47
35
0
21 Dec 2022
A Survey on Reinforcement Learning Security with Application to Autonomous Driving
Ambra Demontis
Maura Pintor
Luca Demetrio
Kathrin Grosse
Hsiao-Ying Lin
Chengfang Fang
Battista Biggio
Fabio Roli
AAML
42
4
0
12 Dec 2022
Model Extraction Attack against Self-supervised Speech Models
Tsung-Yuan Hsu
Chen An Li
Tung-Yu Wu
Hung-yi Lee
27
1
0
29 Nov 2022
Federated Learning Attacks and Defenses: A Survey
Yao Chen
Yijie Gui
Hong Lin
Wensheng Gan
Yongdong Wu
FedML
44
29
0
27 Nov 2022
Data-free Defense of Black Box Models Against Adversarial Attacks
Gaurav Kumar Nayak
Inder Khatri
Ruchit Rawal
Anirban Chakraborty
AAML
33
1
0
03 Nov 2022
Are You Stealing My Model? Sample Correlation for Fingerprinting Deep Neural Networks
Jiyang Guan
Jian Liang
Ran He
AAML
MLAU
50
29
0
21 Oct 2022
Free Fine-tuning: A Plug-and-Play Watermarking Scheme for Deep Neural Networks
Run Wang
Jixing Ren
Boheng Li
Tianyi She
Wenhui Zhang
Liming Fang
Jing Chen
Chao Shen
Lina Wang
WIGM
32
16
0
14 Oct 2022
Decompiling x86 Deep Neural Network Executables
Zhibo Liu
Yuanyuan Yuan
Shuai Wang
Xiaofei Xie
L. Ma
AAML
45
13
0
03 Oct 2022
Privacy Attacks Against Biometric Models with Fewer Samples: Incorporating the Output of Multiple Models
Sohaib Ahmad
Benjamin Fuller
Kaleel Mahmood
AAML
27
0
0
22 Sep 2022
CATER: Intellectual Property Protection on Text Generation APIs via Conditional Watermarks
Xuanli He
Qiongkai Xu
Yi Zeng
Lingjuan Lyu
Fangzhao Wu
Jiwei Li
R. Jia
WaLM
188
72
0
19 Sep 2022
HAPI: A Large-scale Longitudinal Dataset of Commercial ML API Predictions
Lingjiao Chen
Zhihua Jin
Sabri Eyuboglu
Christopher Ré
Matei A. Zaharia
James Zou
51
9
0
18 Sep 2022
Orchestrating Collaborative Cybersecurity: A Secure Framework for Distributed Privacy-Preserving Threat Intelligence Sharing
J. Troncoso-Pastoriza
Alain Mermoud
Romain Bouyé
Francesco Marino
Jean-Philippe Bossuat
Vincent Lenders
Jean-Pierre Hubaux
32
3
0
06 Sep 2022
AWEncoder: Adversarial Watermarking Pre-trained Encoders in Contrastive Learning
Tianxing Zhang
Hanzhou Wu
Xiaofeng Lu
Guangling Sun
AAML
27
4
0
08 Aug 2022
MOVE: Effective and Harmless Ownership Verification via Embedded External Features
Yiming Li
Linghui Zhu
Xiaojun Jia
Yang Bai
Yong Jiang
Shutao Xia
Xiaochun Cao
Kui Ren
AAML
44
12
0
04 Aug 2022
Careful What You Wish For: on the Extraction of Adversarially Trained Models
Kacem Khaled
Gabriela Nicolescu
F. Magalhães
MIACV
AAML
32
4
0
21 Jul 2022
Black-box Generalization of Machine Teaching
Xiaofeng Cao
Yaming Guo
Ivor W. Tsang
James T. Kwok
28
0
0
30 Jun 2022
I Know What You Trained Last Summer: A Survey on Stealing Machine Learning Models and Defences
Daryna Oliynyk
Rudolf Mayer
Andreas Rauber
45
106
0
16 Jun 2022