Make Shuffling Great Again: A Side-Channel Resistant Fisher-Yates Algorithm for Protecting Neural Networks

1 January 2025
Leonard Puškáč
Marek Benovič
Jakub Breier
Xiaolu Hou
Abstract

Neural network models implemented in embedded devices have been shown to be susceptible to side-channel attacks (SCAs), which allow recovery of proprietary model parameters such as weights and biases. Countermeasures already used to protect cryptographic implementations can be tailored to protect embedded neural network models. Shuffling, a hiding-based countermeasure that randomly permutes the order of computations, was shown to be vulnerable to SCA when the Fisher-Yates algorithm is used. In this paper, we propose an SCA-secure version of the Fisher-Yates algorithm. By integrating a masking technique for modular reduction and Blakely's method for modular multiplication, we remove the vulnerability in the division operation that caused side-channel leakage in the original version of the algorithm. We experimentally evaluate the effectiveness of the countermeasure against SCA by mounting a correlation power analysis attack on an embedded neural network model running on an ARM Cortex-M4. Compared to the original proposal, the memory overhead is 2× the size of the largest layer of the network, while the time overhead ranges from 4% for a layer with 100 neurons to 0.49% for a layer with 1000 neurons.
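As context for the abstract, below is a minimal sketch in C of the standard, unprotected Fisher-Yates shuffle over neuron indices, showing where the division-based leakage mentioned above sits. The function rand_u32() is a hypothetical placeholder for a secure random number generator on the target device, and the paper's masked modular reduction and Blakely-based multiplication are not reproduced here; this is only the baseline construction being protected.

#include <stdint.h>
#include <stddef.h>

/* Hypothetical placeholder for a cryptographically secure RNG on the
 * embedded target; not part of the paper. */
extern uint32_t rand_u32(void);

/* Classic (unprotected) Fisher-Yates shuffle of neuron indices.
 * The modular reduction rand_u32() % (i + 1) relies on a division whose
 * power consumption depends on the secret shuffling index; this is the
 * operation the paper's masked, Blakely-based variant replaces. */
void fisher_yates_shuffle(uint16_t *idx, size_t n)
{
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)(rand_u32() % (i + 1)); /* leaky reduction */
        uint16_t tmp = idx[i];
        idx[i] = idx[j];
        idx[j] = tmp;
    }
}

In the protected version proposed in the paper, the plain % reduction above is replaced by a masked modular reduction combined with Blakely's modular multiplication, which removes the data-dependent division.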

@article{puskac2025_2501.00798,
  title={Make Shuffling Great Again: A Side-Channel Resistant Fisher-Yates Algorithm for Protecting Neural Networks},
  author={Leonard Puškáč and Marek Benovič and Jakub Breier and Xiaolu Hou},
  journal={arXiv preprint arXiv:2501.00798},
  year={2025}
}