Better Neural Network Expressivity: Subdividing the Simplex

This work studies the expressivity of ReLU neural networks with a focus on their depth. A sequence of previous works showed that $\lceil\log_2(n+1)\rceil$ hidden layers are sufficient to compute all continuous piecewise linear (CPWL) functions on $\mathbb{R}^n$. Hertrich, Basu, Di Summa, and Skutella (NeurIPS'21) conjectured that this result is optimal in the sense that there are CPWL functions on $\mathbb{R}^n$, like the maximum function, that require this depth. We disprove the conjecture and show that $\lceil\log_3(n-1)\rceil+1$ hidden layers are sufficient to compute all CPWL functions on $\mathbb{R}^n$.

A key step in the proof is that ReLU neural networks with two hidden layers can exactly represent the maximum function of five inputs. More generally, we show that $\lceil\log_3(n-2)\rceil+1$ hidden layers are sufficient to compute the maximum of $n \geq 4$ numbers. Our constructions almost match the $\lceil\log_3(n)\rceil$ lower bound of Averkov, Hojny, and Merkert (ICLR'25) in the special case of ReLU networks with weights that are decimal fractions. The constructions have a geometric interpretation via polyhedral subdivisions of the simplex into ``easier'' polytopes.
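For context, here is a minimal NumPy sketch (not taken from the paper) of the classical construction behind the $\lceil\log_2\rceil$-type upper bounds: the identity $\max(x,y) = \mathrm{ReLU}(x-y) + \mathrm{ReLU}(y) - \mathrm{ReLU}(-y)$ needs one hidden layer, and a binary-tree reduction then computes the maximum of $n$ numbers with $\lceil\log_2 n\rceil$ hidden layers. The paper's contribution is to beat this rate, e.g. representing the maximum of five inputs with only two hidden layers. The function names below are illustrative only.

import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def max2(x, y):
    # Classical one-hidden-layer identity:
    # max(x, y) = ReLU(x - y) + ReLU(y) - ReLU(-y),
    # using y = ReLU(y) - ReLU(-y).
    return relu(x - y) + relu(y) - relu(-y)

def max_tree(values):
    # Pairwise (binary-tree) reduction: each round corresponds to one hidden
    # layer, so the maximum of n numbers uses ceil(log2 n) hidden layers.
    # (An odd leftover value is carried forward; in an actual ReLU network it
    # would be passed through the layer as ReLU(v) - ReLU(-v).)
    vals = list(values)
    depth = 0
    while len(vals) > 1:
        nxt = [max2(vals[i], vals[i + 1]) for i in range(0, len(vals) - 1, 2)]
        if len(vals) % 2 == 1:
            nxt.append(vals[-1])
        vals = nxt
        depth += 1
    return vals[0], depth

if __name__ == "__main__":
    xs = np.random.randn(5)
    m, d = max_tree(xs)
    print(m, np.max(xs), d)  # d = ceil(log2 5) = 3; the paper shows 2 hidden layers suffice for n = 5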
@article{bakaev2025_2505.14338,
  title={Better Neural Network Expressivity: Subdividing the Simplex},
  author={Egor Bakaev and Florestan Brunck and Christoph Hertrich and Jack Stade and Amir Yehudayoff},
  journal={arXiv preprint arXiv:2505.14338},
  year={2025}
}