DeepNcode: Encoding-Based Protection against Bit-Flip Attacks on Neural Networks

22 May 2024
Patrik Velcický, J. Breier, Mladen Kovacevic, Xiaolu Hou
arXiv: 2405.13891
Abstract

Fault injection attacks are a potent threat against embedded implementations of neural network models. Several attack vectors have been proposed, such as misclassification, model extraction, and trojan/backdoor planting. Most of these attacks work by flipping bits in the memory where quantized model parameters are stored. In this paper, we introduce an encoding-based protection method against bit-flip attacks on neural networks, titled DeepNcode. We experimentally evaluate our proposal on several publicly available models and datasets using state-of-the-art bit-flip attacks: BFA, T-BFA, and TA-LBF. Our results show an increase in protection margin of up to 7.6× for 4-bit and 12.4× for 8-bit quantized networks. Memory overheads start at 50% of the original network size, while the time overheads are negligible. Moreover, DeepNcode does not require retraining and does not change the original accuracy of the model.
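To make the general idea concrete, the sketch below (not taken from the paper) illustrates encoding-based protection of quantized weights in Python: each 4-bit weight value is stored as a longer codeword, and the codebook is built so that any two codewords differ in at least a minimum Hamming distance. Flipping fewer bits than that distance can never turn one valid codeword into another, so tampering is detected when the weight is decoded. The codeword length (8 bits), the minimum distance (3), and the greedy codebook construction are illustrative assumptions chosen for simplicity; DeepNcode's actual code construction and overheads (starting at 50% extra memory) differ.

```python
"""Illustrative sketch of encoding-based bit-flip protection for
4-bit quantized weights. Not the authors' implementation."""

D_MIN = 3  # illustrative minimum Hamming distance between codewords


def hamming(a: int, b: int) -> int:
    """Number of bit positions in which a and b differ."""
    return bin(a ^ b).count("1")


def build_codebook(num_values: int = 16, width: int = 8, d_min: int = D_MIN):
    """Greedily pick `num_values` codewords of `width` bits whose pairwise
    Hamming distance is at least `d_min`."""
    codebook = []
    for candidate in range(1 << width):
        if all(hamming(candidate, c) >= d_min for c in codebook):
            codebook.append(candidate)
            if len(codebook) == num_values:
                return codebook
    raise ValueError("cannot build a codebook with these parameters")


CODEBOOK = build_codebook()                        # index = 4-bit weight value
DECODE = {cw: v for v, cw in enumerate(CODEBOOK)}  # codeword -> weight value


def encode_weight(q: int) -> int:
    """Store a 4-bit quantized weight (0..15) as a protected codeword."""
    return CODEBOOK[q]


def decode_weight(stored: int) -> int:
    """Recover the weight; raise if the stored byte is not a valid codeword,
    i.e. a bit flip has been detected."""
    if stored not in DECODE:
        raise RuntimeError("bit-flip detected in protected weight memory")
    return DECODE[stored]


if __name__ == "__main__":
    q = 11                       # some 4-bit quantized weight value
    cw = encode_weight(q)
    assert decode_weight(cw) == q
    tampered = cw ^ 0b00000100   # attacker flips one bit in memory
    try:
        decode_weight(tampered)
    except RuntimeError as e:
        print(e)                 # single and double flips are always caught
```

With minimum distance 3, any one or two bit flips inside a codeword are guaranteed to be detected; raising the distance (at the cost of longer codewords) raises the number of flips an attacker must land, which is the "protection margin" the abstract refers to.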
