High Accuracy and High Fidelity Extraction of Neural Networks
Matthew Jagielski, Nicholas Carlini, David Berthelot, Alexey Kurakin, Nicolas Papernot
3 September 2019 · arXiv:1909.01838 · MLAU, MIACV

Papers citing "High Accuracy and High Fidelity Extraction of Neural Networks" (50 of 101 papers shown)

Margin-distancing for safe model explanation
Tom Yan, Chicheng Zhang · 23 Feb 2022

Fingerprinting Deep Neural Networks Globally via Universal Adversarial Perturbations
Zirui Peng, Shaofeng Li, Guoxing Chen, Cheng Zhang, Haojin Zhu, Minhui Xue · AAML, FedML · 17 Feb 2022

Hardness of Noise-Free Learning for Two-Hidden-Layer Neural Networks
Sitan Chen, Aravind Gollakota, Adam R. Klivans, Raghu Meka · 10 Feb 2022

MEGA: Model Stealing via Collaborative Generator-Substitute Networks
Chi Hong, Jiyue Huang, L. Chen · 31 Jan 2022

SSLGuard: A Watermarking Scheme for Self-supervised Learning Pre-trained Encoders
Tianshuo Cong, Xinlei He, Yang Zhang · 27 Jan 2022

StolenEncoder: Stealing Pre-trained Encoders in Self-supervised Learning
Yupei Liu, Jinyuan Jia, Hongbin Liu, Neil Zhenqiang Gong · MIACV · 15 Jan 2022

Security for Machine Learning-based Software Systems: a survey of threats, practices and challenges
Huaming Chen, Muhammad Ali Babar · AAML · 12 Jan 2022

Model Stealing Attacks Against Inductive Graph Neural Networks
Yun Shen, Xinlei He, Yufei Han, Yang Zhang · 15 Dec 2021

Defending against Model Stealing via Verifying Embedded External Features
Yiming Li, Linghui Zhu, Xiaojun Jia, Yong Jiang, Shutao Xia, Xiaochun Cao · AAML · 07 Dec 2021

Property Inference Attacks Against GANs
Junhao Zhou, Yufei Chen, Chao Shen, Yang Zhang · AAML, MIACV · 15 Nov 2021

Efficiently Learning Any One Hidden Layer ReLU Network From Queries
Sitan Chen, Adam R. Klivans, Raghu Meka · MLAU, MLT · 08 Nov 2021

DeepSteal: Advanced Model Extractions Leveraging Efficient Weight Stealing in Memories
Adnan Siraj Rakin, Md Hafizul Islam Chowdhuryy, Fan Yao, Deliang Fan · AAML, MIACV · 08 Nov 2021

Confidential Machine Learning Computation in Untrusted Environments: A Systems Security Perspective
Kha Dinh Duy, Taehyun Noh, Siwon Huh, Hojoon Lee · 05 Nov 2021

Inference Attacks Against Graph Neural Networks
Zhikun Zhang, Min Chen, Michael Backes, Yun Shen, Yang Zhang · MIACV, AAML, GNN · 06 Oct 2021

SoK: Machine Learning Governance
Varun Chandrasekaran, Hengrui Jia, Anvith Thudi, Adelin Travers, Mohammad Yaghini, Nicolas Papernot · 20 Sep 2021

Membership Inference Attacks Against Recommender Systems
Minxing Zhang, Zhaochun Ren, Zihan Wang, Pengjie Ren, Zhumin Chen, Pengfei Hu, Yang Zhang · MIACV, AAML · 16 Sep 2021

Guarding Machine Learning Hardware Against Physical Side-Channel Attacks
Anuj Dubey, Rosario Cammarota, Vikram B. Suresh, Aydin Aysu · AAML · 01 Sep 2021

Power-Based Attacks on Spatial DNN Accelerators
Ge Li, Mohit Tiwari, Michael Orshansky · 28 Aug 2021

SoK: How Robust is Image Classification Deep Neural Network Watermarking? (Extended Version)
Nils Lukas, Edward Jiang, Xinda Li, Florian Kerschbaum · AAML · 11 Aug 2021

DeepFreeze: Cold Boot Attacks and High Fidelity Model Recovery on Commercial EdgeML Device
Yoo-Seung Won, Soham Chatterjee, Dirmanto Jap, A. Basu, S. Bhasin · AAML, FedML · 03 Aug 2021

MEGEX: Data-Free Model Extraction Attack against Gradient-Based Explainable AI
T. Miura, Satoshi Hasegawa, Toshiki Shibahara · SILM, MIACV · 19 Jul 2021

Survey: Leakage and Privacy at Inference Time
Marija Jegorova, Chaitanya Kaul, Charlie Mayor, Alison Q. O'Neil, Alexander Weir, Roderick Murray-Smith, Sotirios A. Tsaftaris · PILM, MIACV · 04 Jul 2021

HODA: Hardness-Oriented Detection of Model Extraction Attacks
A. M. Sadeghzadeh, Amir Mohammad Sobhanian, F. Dehghan, R. Jalili · MIACV · 21 Jun 2021

Honest-but-Curious Nets: Sensitive Attributes of Private Inputs Can Be Secretly Coded into the Classifiers' Outputs
Mohammad Malekzadeh, Anastasia Borovykh, Deniz Gündüz · MIACV · 25 May 2021

A Review of Confidentiality Threats Against Embedded Neural Network Models
Raphael Joud, Pierre-Alain Moëllic, Rémi Bernhard, J. Rigaud · 04 May 2021

Black-Box Dissector: Towards Erasing-based Hard-Label Model Stealing Attack
Yixu Wang, Jie Li, Hong Liu, Yan Wang, Yongjian Wu, Feiyue Huang, Rongrong Ji · AAML · 03 May 2021

Proof-of-Learning: Definitions and Practice
Hengrui Jia, Mohammad Yaghini, Christopher A. Choquette-Choo, Natalie Dullerud, Anvith Thudi, Varun Chandrasekaran, Nicolas Papernot · AAML · 09 Mar 2021

Membership Inference Attacks are Easier on Difficult Problems
Avital Shafran, Shmuel Peleg, Yedid Hoshen · MIACV · 15 Feb 2021

Quantifying and Mitigating Privacy Risks of Contrastive Learning
Xinlei He, Yang Zhang · 08 Feb 2021

Model Extraction and Defenses on Generative Adversarial Networks
Hailong Hu, Jun Pang · SILM, MIACV · 06 Jan 2021

Data-Free Model Extraction
Jean-Baptiste Truong, Pratyush Maini, R. Walls, Nicolas Papernot · MIACV · 30 Nov 2020

Model Extraction Attacks on Graph Neural Networks: Taxonomy and Realization
Bang Wu, Xiangwen Yang, Shirui Pan, Xingliang Yuan · MIACV, MLAU · 24 Oct 2020

Amnesiac Machine Learning
Laura Graves, Vineel Nagisetty, Vijay Ganesh · MU, MIACV · 21 Oct 2020

A Systematic Review on Model Watermarking for Neural Networks
Franziska Boenisch · AAML · 25 Sep 2020

Model extraction from counterfactual explanations
Ulrich Aïvodji, Alexandre Bolot, Sébastien Gambs · MIACV, MLAU · 03 Sep 2020

Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review
Yansong Gao, Bao Gia Doan, Zhi-Li Zhang, Siqi Ma, Jiliang Zhang, Anmin Fu, Surya Nepal, Hyoungshick Kim · AAML · 21 Jul 2020

A Survey of Privacy Attacks in Machine Learning
M. Rigaki, Sebastian Garcia · PILM, AAML · 15 Jul 2020

SoK: The Faults in our ASRs: An Overview of Attacks against Automatic Speech Recognition and Speaker Identification Systems
H. Abdullah, Kevin Warren, Vincent Bindschaedler, Nicolas Papernot, Patrick Traynor · AAML · 13 Jul 2020

Improving LIME Robustness with Smarter Locality Sampling
Sean Saito, Eugene Chua, Nicholas Capel, Rocco Hu · FAtt, AAML · 22 Jun 2020

BoMaNet: Boolean Masking of an Entire Neural Network
Anuj Dubey, Rosario Cammarota, Aydin Aysu · AAML · 16 Jun 2020

Stealing Deep Reinforcement Learning Models for Fun and Profit
Kangjie Chen, Shangwei Guo, Tianwei Zhang, Xiaofei Xie, Yang Liu · MLAU, MIACV, OffRL · 09 Jun 2020

Perturbing Inputs to Prevent Model Stealing
J. Grana · AAML, SILM · 12 May 2020

When Machine Unlearning Jeopardizes Privacy
Min Chen, Zhikun Zhang, Tianhao Wang, Michael Backes, Mathias Humbert, Yang Zhang · MIACV · 05 May 2020

Entangled Watermarks as a Defense against Model Extraction
Hengrui Jia, Christopher A. Choquette-Choo, Varun Chandrasekaran, Nicolas Papernot · WaLM, AAML · 27 Feb 2020

SNIFF: Reverse Engineering of Neural Networks with Fault Attacks
J. Breier, Dirmanto Jap, Xiaolu Hou, S. Bhasin, Yang Liu · 23 Feb 2020

Mind Your Weight(s): A Large-scale Study on Insufficient Machine Learning Model Protection in Mobile Apps
Zhichuang Sun, Ruimin Sun, Long Lu, Alan Mislove · 18 Feb 2020

Deep Neural Network Fingerprinting by Conferrable Adversarial Examples
Nils Lukas, Yuxuan Zhang, Florian Kerschbaum · MLAU, FedML, AAML · 02 Dec 2019

The Threat of Adversarial Attacks on Machine Learning in Network Security -- A Survey
Olakunle Ibitoye, Rana Abou-Khamis, Mohamed el Shehaby, Ashraf Matrawy, M. O. Shafiq · AAML · 06 Nov 2019

Extraction of Complex DNN Models: Real Threat or Boogeyman?
B. Atli, S. Szyller, Mika Juuti, Samuel Marchal, Nadarajah Asokan · MLAU, MIACV · 11 Oct 2019

Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks
Guy Katz, Clark W. Barrett, D. Dill, Kyle D. Julian, Mykel Kochenderfer · AAML · 03 Feb 2017