ResearchTrend.AI
Coding for Computation: Efficient Compression of Neural Networks for Reconfigurable Hardware

24 April 2025
Hans Rosenberger
Rodrigo Fischer
Johanna S. Fröhlich
Ali Bereyhi
Ralf R. Müller
Abstract

As state-of-the-art neural networks (NNs) continue to grow in size, their resource-efficient implementation becomes ever more important. In this paper, we introduce a compression scheme that reduces the number of computations required for NN inference on reconfigurable hardware such as FPGAs. This is achieved by combining pruning via regularized training, weight sharing, and linear computation coding (LCC). In contrast to common NN compression techniques, whose objective is to reduce the memory used for storing the NN weights, our approach is optimized to reduce the number of additions required for inference in a hardware-friendly manner. The proposed scheme achieves competitive performance for simple multilayer perceptrons as well as for large-scale deep NNs such as ResNet-34.
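To make the weight-sharing idea concrete, here is a minimal NumPy sketch of one of the abstract's ingredients. It is an illustration, not the paper's actual LCC scheme: the codebook construction (quantile-based) and layer sizes are assumptions chosen for brevity. Once a layer's weights are quantized to a small shared codebook, a matrix-vector product needs only one multiplication per (output row, codebook entry); all remaining work is additions, which is what makes the approach attractive on FPGA-style hardware.

```python
import numpy as np

# Toy layer: 8 outputs, 16 inputs (sizes are illustrative assumptions).
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))
x = rng.standard_normal(16)

# Weight sharing: snap each weight to the nearest of K shared values.
# A quantile-based codebook is a crude stand-in for a trained one.
K = 4
codebook = np.quantile(W, np.linspace(0.1, 0.9, K))
idx = np.abs(W[..., None] - codebook).argmin(axis=-1)  # codebook index per weight
W_shared = codebook[idx]

# Inference with shared weights: per output row, first accumulate the
# inputs that map to the same codebook entry (additions only), then
# scale each partial sum by its shared value (K multiplications per row).
y = np.empty(W.shape[0])
for r in range(W.shape[0]):
    partial = np.zeros(K)
    for k in range(K):
        partial[k] = x[idx[r] == k].sum()  # additions only
    y[r] = partial @ codebook              # K multiplications

# The grouped computation matches the dense product with quantized weights.
assert np.allclose(y, W_shared @ x)
```

The multiplication count per row drops from 16 to K = 4 here; the paper's LCC step goes further by also approximating the remaining multiplications with structured additions, but that is beyond this sketch.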

View on arXiv
@article{rosenberger2025_2504.17403,
  title={Coding for Computation: Efficient Compression of Neural Networks for Reconfigurable Hardware},
  author={Hans Rosenberger and Rodrigo Fischer and Johanna S. Fröhlich and Ali Bereyhi and Ralf R. Müller},
  journal={arXiv preprint arXiv:2504.17403},
  year={2025}
}