ResearchTrend.AI
Compact and Efficient Neural Networks for Image Recognition Based on Learned 2D Separable Transform

10 May 2025
Maxim Vashkevich
Egor Krivalcevich
Abstract

This paper presents a learned two-dimensional separable transform (LST), which can be considered a new type of computational layer for constructing neural network (NN) architectures for image recognition tasks. The LST is based on the idea of sharing the weights of one fully-connected (FC) layer to process all rows of an image; a second shared FC layer then processes all columns of the image representation obtained from the first layer. Using LST layers in an NN architecture significantly reduces the number of model parameters compared to models that use stacked FC layers. We show that an NN classifier based on a single LST layer followed by an FC layer achieves 98.02% accuracy on the MNIST dataset while having only 9.5k parameters. We also implemented an LST-based classifier for handwritten digit recognition on an FPGA platform to demonstrate the efficiency of the suggested approach for designing compact, high-performance implementations of NN models. Git repository with supplementary materials: this https URL
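The row/column weight-sharing idea and the quoted parameter count can be sketched in NumPy. This is an illustrative reconstruction from the abstract only, not the authors' implementation: all names are hypothetical, and the real model may include biases or nonlinearities in the LST layer.

```python
import numpy as np

rng = np.random.default_rng(0)

# One LST layer for a 28x28 MNIST image: a single FC weight matrix
# shared across all rows, and a second one shared across all columns.
W_row = rng.standard_normal((28, 28)) * 0.1  # shared row transform
W_col = rng.standard_normal((28, 28)) * 0.1  # shared column transform

def lst(x):
    """Learned 2D separable transform: every row of x is mapped by
    W_row, then every column of the result is mapped by W_col."""
    h = x @ W_row.T   # same FC weights applied to each row
    return W_col @ h  # same FC weights applied to each column

# Classifier head: flatten the LST output and apply one FC layer
# that maps to the 10 digit classes.
W_fc = rng.standard_normal((10, 28 * 28)) * 0.01
b_fc = np.zeros(10)

def classify(x):
    z = lst(x).reshape(-1)
    return W_fc @ z + b_fc

# Parameter count: 28*28 + 28*28 + 784*10 + 10 = 9418,
# consistent with the ~9.5k figure reported in the abstract.
params = W_row.size + W_col.size + W_fc.size + b_fc.size
```

Because each shared matrix is only 28x28, the LST layer costs 1,568 parameters, versus 784x784 ≈ 615k for a dense FC layer acting on the flattened image — which is the source of the parameter savings the abstract describes.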

@article{vashkevich2025_2505.06578,
  title={Compact and Efficient Neural Networks for Image Recognition Based on Learned 2D Separable Transform},
  author={Maxim Vashkevich and Egor Krivalcevich},
  journal={arXiv preprint arXiv:2505.06578},
  year={2025}
}