
Parallel Layer Normalization for Universal Approximation

Abstract

The universal approximation theorem (UAT) is a fundamental result for deep neural networks (DNNs), establishing their capacity to approximate arbitrary functions. Existing analyses and proofs of the UAT, however, consider traditional networks built only from linear layers and nonlinear activation functions, omitting the normalization layers commonly employed to ease the training of modern networks. This paper studies the UAT of DNNs with normalization layers for the first time. We theoretically prove that an infinitely wide network composed solely of parallel layer normalization (PLN) and linear layers has universal approximation capacity. Additionally, we investigate the minimum number of neurons required to approximate L-Lipschitz continuous functions with a single-hidden-layer network, and we theoretically compare the approximation capacity of PLN with that of traditional activation functions. Unlike traditional activation functions, PLN can act as both an activation function and a normalization layer in deep neural networks at the same time. We also find that PLN improves performance when it replaces LN in Transformer architectures, which reveals the potential of PLN in neural architecture design.
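
To make the "PLN plus linear layers" construction concrete, below is a minimal PyTorch sketch. It assumes PLN partitions the hidden vector into equally sized groups and applies standard layer normalization within each group independently, so the module can serve as a parameter-free nonlinearity between linear layers; the module name ParallelLayerNorm, the group sizes, and the absence of learnable affine parameters are illustrative assumptions rather than the authors' exact formulation.

import torch
import torch.nn as nn

class ParallelLayerNorm(nn.Module):
    """Hypothetical sketch of parallel layer normalization (PLN).

    Assumption: the hidden vector is split into equally sized groups and
    layer normalization is applied to each group independently, letting the
    module act as the nonlinearity between linear layers.
    """

    def __init__(self, num_features: int, num_groups: int, eps: float = 1e-5):
        super().__init__()
        assert num_features % num_groups == 0, "features must split evenly into groups"
        self.num_groups = num_groups
        self.group_size = num_features // num_groups
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Reshape (batch, num_features) -> (batch, num_groups, group_size).
        b = x.shape[0]
        g = x.view(b, self.num_groups, self.group_size)
        # Normalize each group to zero mean and unit variance.
        mean = g.mean(dim=-1, keepdim=True)
        var = g.var(dim=-1, unbiased=False, keepdim=True)
        g = (g - mean) / torch.sqrt(var + self.eps)
        return g.view(b, -1)


# Usage: a single-hidden-layer network of the form linear -> PLN -> linear.
if __name__ == "__main__":
    width, groups = 64, 16          # illustrative sizes
    net = nn.Sequential(
        nn.Linear(1, width),
        ParallelLayerNorm(width, groups),
        nn.Linear(width, 1),
    )
    y = net(torch.randn(8, 1))
    print(y.shape)                  # torch.Size([8, 1])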

@article{ni2025_2505.13142,
  title={Parallel Layer Normalization for Universal Approximation},
  author={Yunhao Ni and Yuhe Liu and Wenxin Sun and Yitong Tang and Yuxin Guo and Peilin Feng and Wenjun Wu and Lei Huang},
  journal={arXiv preprint arXiv:2505.13142},
  year={2025}
}