ResearchTrend.AI

Open DNN Box by Power Side-Channel Attack (arXiv:1907.10406)

21 July 2019
Yun Xiang, Zhuangzhi Chen, Zuohui Chen, Zebin Fang, Haiyang Hao, Jinyin Chen, Yi Liu, Zhefu Wu, Qi Xuan, Xiaoniu Yang
    AAML

Papers citing "Open DNN Box by Power Side-Channel Attack"

19 papers shown
A Survey of Model Extraction Attacks and Defenses in Distributed Computing Environments
Kaixiang Zhao, Lincan Li, Kaize Ding, Neil Zhenqiang Gong, Yue Zhao, Yushun Dong
AAML · 22 Feb 2025
Revealing CNN Architectures via Side-Channel Analysis in Dataflow-based Inference Accelerators
Hansika Weerasena, Prabhat Mishra
FedML · 01 Nov 2023
A Practical Introduction to Side-Channel Extraction of Deep Neural Network Parameters
Raphael Joud, Pierre-Alain Moëllic, S. Pontié, J. Rigaud
AAML, MIACV, MLAU · 10 Nov 2022
Decompiling x86 Deep Neural Network Executables
Zhibo Liu, Yuanyuan Yuan, Shuai Wang, Xiaofei Xie, Lei Ma
AAML · 03 Oct 2022
Side-channel attack analysis on in-memory computing architectures
Ziyu Wang, Fanruo Meng, Yongmo Park, Jason K. Eshraghian, Wei D. Lu
06 Sep 2022
Careful What You Wish For: on the Extraction of Adversarially Trained Models
Kacem Khaled, Gabriela Nicolescu, F. Magalhães
MIACV, AAML · 21 Jul 2022
I Know What You Trained Last Summer: A Survey on Stealing Machine Learning Models and Defences
Daryna Oliynyk, Rudolf Mayer, Andreas Rauber
16 Jun 2022
Stealthy Attack on Algorithmic-Protected DNNs via Smart Bit Flipping
B. Ghavami, Seyd Movi, Zhenman Fang, Lesley Shannon
AAML · 25 Dec 2021
DeepSteal: Advanced Model Extractions Leveraging Efficient Weight Stealing in Memories
Adnan Siraj Rakin, Md Hafizul Islam Chowdhuryy, Fan Yao, Deliang Fan
AAML, MIACV · 08 Nov 2021
Confidential Machine Learning Computation in Untrusted Environments: A Systems Security Perspective
Kha Dinh Duy, Taehyun Noh, Siwon Huh, Hojoon Lee
05 Nov 2021
Physical Side-Channel Attacks on Embedded Neural Networks: A Survey
M. M. Real, Ruben Salvador
AAML · 21 Oct 2021
Guarding Machine Learning Hardware Against Physical Side-Channel Attacks
Anuj Dubey, Rosario Cammarota, Vikram B. Suresh, Aydin Aysu
AAML · 01 Sep 2021
Power-Based Attacks on Spatial DNN Accelerators
Ge Li, Mohit Tiwari, Michael Orshansky
28 Aug 2021
An Overview of Laser Injection against Embedded Neural Network Models
Mathieu Dumont, Pierre-Alain Moëllic, R. Viera, J. Dutertre, Rémi Bernhard
AAML · 04 May 2021
A Review of Confidentiality Threats Against Embedded Neural Network Models
Raphael Joud, Pierre-Alain Moëllic, Rémi Bernhard, J. Rigaud
04 May 2021
Hermes Attack: Steal DNN Models with Lossless Inference Accuracy
Yuankun Zhu, Yueqiang Cheng, Husheng Zhou, Yantao Lu
MIACV, AAML · 23 Jun 2020
BoMaNet: Boolean Masking of an Entire Neural Network
Anuj Dubey, Rosario Cammarota, Aydin Aysu
AAML · 16 Jun 2020
Adversarial examples in the physical world
Alexey Kurakin, Ian Goodfellow, Samy Bengio
SILM, AAML · 08 Jul 2016
Improving neural networks by preventing co-adaptation of feature detectors
Geoffrey E. Hinton, Nitish Srivastava, A. Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov
VLM · 03 Jul 2012