ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

On the Optimality of Single-label and Multi-label Neural Network Decoders

24 March 2025
Yunus Can Gültekin
Péter Scheepers
Yuncheng Yuan
Federico Corradi
Alex Alvarado
Abstract

We investigate the design of two neural network (NN) architectures recently proposed as decoders for forward error correction: the so-called single-label NN (SLNN) and multi-label NN (MLNN) decoders. These decoders have been reported to achieve near-optimal codeword- and bit-wise performance, respectively. Results in the literature show near-optimality for a variety of short codes. In this paper, we analytically prove that certain SLNN and MLNN architectures can, in fact, always realize optimal decoding, regardless of the code. These optimal architectures and their binary weights are shown to be defined by the codebook, i.e., no training or network optimization is required. Our proposed architectures are in fact not NNs, but a different way of implementing the maximum likelihood decoding rule. Optimal performance is numerically demonstrated for Hamming (7,4), Polar (16,8), and BCH (31,21) codes. The results show that our optimal architectures are less complex than the SLNN and MLNN architectures proposed in the literature, which in fact only achieve near-optimal performance. Extension to longer codes is still hindered by the curse of dimensionality. Therefore, even though SLNN and MLNN can perform maximum likelihood decoding, such architectures cannot be used for medium and long codes.
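The core idea the abstract describes — a codebook-defined layer with binary weights that implements maximum likelihood (ML) decoding, with no training — can be sketched for the Hamming (7,4) case. The snippet below is an illustrative reconstruction, not the paper's exact architecture: for BPSK over AWGN, ML decoding reduces to correlating the received vector with the ±1 images of all 2^k codewords and picking the argmax (the "single label"). The generator matrix used here is one common systematic form, chosen as an assumption for the example.

```python
import itertools
import numpy as np

# One common systematic generator matrix for the Hamming (7,4) code
# (assumed here for illustration; any equivalent form works).
G = np.array([
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 0, 1, 1],
    [0, 0, 1, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 0, 1],
])

# Enumerate all 2^4 = 16 codewords: this codebook alone defines the decoder.
messages = np.array(list(itertools.product([0, 1], repeat=4)))
codebook = messages @ G % 2          # shape (16, 7)

def ml_decode(r):
    """ML decoding of a BPSK-modulated codeword received over AWGN.

    With BPSK (bit b -> 1 - 2b), ML decoding reduces to one matrix-vector
    product with +/-1 ("binary") weights followed by an argmax. This is the
    untrained, codebook-defined computation the abstract refers to.
    """
    bpsk_codebook = 1 - 2 * codebook   # map bits {0,1} -> symbols {+1,-1}
    scores = bpsk_codebook @ r         # one correlation per codeword
    best = int(np.argmax(scores))      # codeword index = the "single label"
    return messages[best], codebook[best]

# Noiseless sanity check: transmit a codeword and decode it back.
msg = np.array([1, 0, 1, 1])
tx = (1 - 2 * (msg @ G % 2)).astype(float)
decoded_msg, _ = ml_decode(tx)
```

The 2^k enumeration also makes the abstract's final caveat concrete: the codebook (and hence the correlation layer) grows exponentially in k, which is the curse of dimensionality that rules out medium and long codes.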

@article{gültekin2025_2503.18758,
  title={On the Optimality of Single-label and Multi-label Neural Network Decoders},
  author={Yunus Can Gültekin and Péter Scheepers and Yuncheng Yuan and Federico Corradi and Alex Alvarado},
  journal={arXiv preprint arXiv:2503.18758},
  year={2025}
}