A Minimum Description Length Approach to Regularization in Neural Networks

Abstract

State-of-the-art neural networks can be trained to become remarkable solutions to many problems. But while these architectures can express symbolic, perfect solutions, trained models often arrive at approximations instead. We show that the choice of regularization method plays a crucial role: when trained on formal languages with standard regularization ($L_1$, $L_2$, or none), expressive architectures not only fail to converge to correct solutions but are actively pushed away from perfect initializations. In contrast, applying the Minimum Description Length (MDL) principle to balance model complexity with data fit provides a theoretically grounded regularization method. Using MDL, perfect solutions are selected over approximations, independently of the optimization algorithm. We propose that unlike existing regularization techniques, MDL introduces the appropriate inductive bias to effectively counteract overfitting and promote generalization.
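As a minimal sketch of the idea (not the paper's exact encoding scheme), an MDL score for selecting among trained candidates can be written as a two-part code length: the bits needed to encode the model plus the bits needed to encode the data given the model. The function name mdl_score, the fixed bits_per_weight cost, and the zero_eps sparsity threshold below are illustrative assumptions:

import math
import torch
import torch.nn as nn

def mdl_score(model, inputs, targets, bits_per_weight=32.0, zero_eps=1e-8):
    """Two-part MDL score in bits: |code(model)| + |code(data | model)|.

    Sketch only: the data term is the targets' cross-entropy under the
    model (converted from nats to bits); the model term is a naive fixed
    cost per effectively nonzero weight, standing in for a real weight
    encoding.
    """
    with torch.no_grad():
        nll_nats = nn.functional.cross_entropy(
            model(inputs), targets, reduction="sum"
        )
    data_bits = nll_nats.item() / math.log(2)

    # Count weights that are not (numerically) zero; sparser models
    # get shorter model codes.
    nonzero = sum(int((p.abs() > zero_eps).sum()) for p in model.parameters())
    model_bits = bits_per_weight * nonzero

    return model_bits + data_bits

Under this scoring, the candidate with the lowest total code length is preferred. Because it is a selection criterion rather than a gradient penalty, it can favor exact, shorter-description solutions regardless of the optimizer, consistent with the abstract's claim that MDL selects perfect solutions independently of the optimization algorithm.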

@article{abudy2025_2505.13398,
  title={A Minimum Description Length Approach to Regularization in Neural Networks},
  author={Matan Abudy and Orr Well and Emmanuel Chemla and Roni Katzir and Nur Lan},
  journal={arXiv preprint arXiv:2505.13398},
  year={2025}
}