Fault Sneaking Attack: a Stealthy Framework for Misleading Deep Neural Networks
Pu Zhao, Siyue Wang, Cheng Gongye, Yanzhi Wang, Yunsi Fei, Xue Lin
arXiv:1905.12032 · AAML · 28 May 2019

Papers citing "Fault Sneaking Attack: a Stealthy Framework for Misleading Deep Neural Networks"

18 / 18 papers shown
Title
1. Variation Enhanced Attacks Against RRAM-based Neuromorphic Computing System
   Hao Lv, Bing Li, Lefei Zhang, Cheng Liu, Ying Wang · AAML · 20 Feb 2023
2. An Incremental Gray-box Physical Adversarial Attack on Neural Network Training
   Rabiah Al-qudah, Moayad Aloqaily, B. Ouni, Mohsen Guizani, T. Lestable · AAML · 20 Feb 2023
3. Attacks in Adversarial Machine Learning: A Systematic Survey from the Life-cycle Perspective
   Baoyuan Wu, Zihao Zhu, Li Liu, Qingshan Liu, Zhaofeng He, Siwei Lyu · AAML · 19 Feb 2023
4. New data poison attacks on machine learning classifiers for mobile exfiltration
   M. A. Ramírez, Sangyoung Yoon, Ernesto Damiani, H. A. Hamadi, C. Ardagna, Nicola Bena, Young-Ji Byon, Tae-Yeon Kim, C. Cho, C. Yeun · AAML · 20 Oct 2022
5. Poisoning Attacks and Defenses on Artificial Intelligence: A Survey
   M. A. Ramírez, Song-Kyoo Kim, H. A. Hamadi, Ernesto Damiani, Young-Ji Byon, Tae-Yeon Kim, C. Cho, C. Yeun · AAML · 21 Feb 2022
6. Stealthy Attack on Algorithmic-Protected DNNs via Smart Bit Flipping
   B. Ghavami, Seyd Movi, Zhenman Fang, Lesley Shannon · AAML · 25 Dec 2021
7. Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks
   Xiangyu Qi, Tinghao Xie, Ruizhe Pan, Jifeng Zhu, Yong-Liang Yang, Kai Bu · AAML · 25 Nov 2021
8. Attacking Deep Learning AI Hardware with Universal Adversarial Perturbation
   Mehdi Sadi, B. M. S. Bahar Talukder, Kaniz Mishty, Md. Tauhidur Rahman · AAML · 18 Nov 2021
9. ZeBRA: Precisely Destroying Neural Networks with Zero-Data Based Repeated Bit Flip Attack
   Dahoon Park, K. Kwon, Sunghoon Im, Jaeha Kung · AAML · 01 Nov 2021
10. FooBaR: Fault Fooling Backdoor Attack on Neural Network Training
    J. Breier, Xiaolu Hou, Martín Ochoa, Jesus Solano · SILM, AAML · 23 Sep 2021
11. Non-Singular Adversarial Robustness of Neural Networks
    Yu-Lin Tsai, Chia-Yi Hsu, Chia-Mu Yu, Pin-Yu Chen · AAML, OOD · 23 Feb 2021
12. DeepPoison: Feature Transfer Based Stealthy Poisoning Attack
    Jinyin Chen, Longyuan Zhang, Haibin Zheng, Xueke Wang, Zhaoyan Ming · AAML · 06 Jan 2021
13. Artificial Neural Networks and Fault Injection Attacks
    Shahin Tajik, F. Ganji · SILM · 17 Aug 2020
14. Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness
    Pu Zhao, Pin-Yu Chen, Payel Das, Karthikeyan N. Ramamurthy, Xue Lin · AAML · 30 Apr 2020
15. DeepHammer: Depleting the Intelligence of Deep Neural Networks through Targeted Chain of Bit Flips
    Fan Yao, Adnan Siraj Rakin, Deliang Fan · AAML · 30 Mar 2020
16. Protecting Neural Networks with Hierarchical Random Switching: Towards Better Robustness-Accuracy Trade-off for Stochastic Defenses
    Tianlin Li, Siyue Wang, Pin-Yu Chen, Yanzhi Wang, Brian Kulis, Xue Lin, S. Chin · AAML · 20 Aug 2019
17. On the Design of Black-box Adversarial Examples by Leveraging Gradient-free Optimization and Operator Splitting Method
    Pu Zhao, Sijia Liu, Pin-Yu Chen, Nghia Hoang, Kaidi Xu, B. Kailkhura, Xue Lin · AAML · 26 Jul 2019
18. Adversarial Machine Learning at Scale
    Alexey Kurakin, Ian Goodfellow, Samy Bengio · AAML · 04 Nov 2016