ResearchTrend.AI
Knockoff Nets: Stealing Functionality of Black-Box Models
Tribhuvanesh Orekondy, Bernt Schiele, Mario Fritz
MLAU
6 December 2018

Papers citing "Knockoff Nets: Stealing Functionality of Black-Box Models"

7 / 107 papers shown
Model Weight Theft With Just Noise Inputs: The Curious Case of the Petulant Attacker
Nicholas Roberts, Vinay Uday Prabhu, Matthew McAteer
19 Dec 2019
Deep Neural Network Fingerprinting by Conferrable Adversarial Examples
Nils Lukas, Yuxuan Zhang, Florian Kerschbaum
MLAU, FedML, AAML
02 Dec 2019
The Threat of Adversarial Attacks on Machine Learning in Network Security -- A Survey
Olakunle Ibitoye, Rana Abou-Khamis, Mohamed el Shehaby, Ashraf Matrawy, M. O. Shafiq
AAML
06 Nov 2019
Extraction of Complex DNN Models: Real Threat or Boogeyman?
B. Atli, S. Szyller, Mika Juuti, Samuel Marchal, Nadarajah Asokan
MLAU, MIACV
11 Oct 2019
Universal Adversarial Audio Perturbations
Sajjad Abdoli, L. G. Hafemann, Jérôme Rony, Ismail Ben Ayed, P. Cardinal, Alessandro Lameiras Koerich
AAML
08 Aug 2019
Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks
Tribhuvanesh Orekondy, Bernt Schiele, Mario Fritz
AAML
26 Jun 2019
Gradient-Leaks: Understanding and Controlling Deanonymization in Federated Learning
Tribhuvanesh Orekondy, Seong Joon Oh, Yang Zhang, Bernt Schiele, Mario Fritz
PICV, FedML
15 May 2018