Approximating Latent Manifolds in Neural Networks via Vanishing Ideals

Deep neural networks have reshaped modern machine learning by learning powerful latent representations that often align with the manifold hypothesis: high-dimensional data lie on lower-dimensional manifolds. In this paper, we establish a connection between manifold learning and computational algebra by demonstrating how vanishing ideals can characterize the latent manifolds of deep networks. To that end, we propose a new neural architecture that (i) truncates a pretrained network at an intermediate layer, (ii) approximates each class manifold via polynomial generators of the vanishing ideal, and (iii) transforms the resulting latent space into linearly separable features through a single polynomial layer. The resulting models have significantly fewer layers than their pretrained baselines, while maintaining comparable accuracy, achieving higher throughput, and utilizing fewer parameters. Furthermore, drawing on spectral complexity analysis, we derive sharper theoretical guarantees for generalization, showing that our approach can in principle offer tighter bounds than standard deep networks. Numerical experiments confirm the effectiveness and efficiency of the proposed approach.
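To make the pipeline concrete, the following is a minimal NumPy sketch of the idea under simplifying assumptions: degree-2 monomial features stand in for the latent representation produced by a truncated network, approximate vanishing-ideal generators for each class are recovered from the small right singular vectors of the monomial feature matrix (the paper's actual generator-construction algorithm may differ), and the "polynomial layer" simply evaluates every class's generators on a latent vector. The helper names monomials_deg2, fit_vanishing_generators, and polynomial_layer are illustrative, not taken from the paper.

import numpy as np

def monomials_deg2(Z):
    """Map latent vectors Z of shape (m, d) to all monomials of degree <= 2:
    [1, z_i, z_i * z_j for i <= j]."""
    m, d = Z.shape
    cols = [np.ones((m, 1)), Z]
    for i in range(d):
        for j in range(i, d):
            cols.append((Z[:, i] * Z[:, j])[:, None])
    return np.hstack(cols)

def fit_vanishing_generators(Z_class, tol=0.05):
    """Approximate generators of one class manifold's vanishing ideal:
    coefficient vectors c with ||M(Z_class) c|| close to zero, taken from the
    right singular vectors with relatively small singular values."""
    M = monomials_deg2(Z_class)
    _, s, Vt = np.linalg.svd(M, full_matrices=False)
    near_vanishing = s < tol * s[0]
    return Vt[near_vanishing].T  # shape: (n_monomials, n_generators)

def polynomial_layer(Z, generators_per_class):
    """Evaluate every class's generators on the latents; points near class k's
    manifold yield near-zero values on class k's generators, which makes the
    resulting features approximately linearly separable."""
    M = monomials_deg2(Z)
    return np.hstack([np.abs(M @ G) for G in generators_per_class])

# Toy usage: two synthetic "class manifolds" (circles of radius 1 and 2)
# standing in for the latent representations of a truncated network.
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 2.0 * np.pi, 200)
Z0 = np.c_[np.cos(t), np.sin(t)] + 0.01 * rng.normal(size=(200, 2))
Z1 = 2.0 * np.c_[np.cos(t), np.sin(t)] + 0.01 * rng.normal(size=(200, 2))

generators = [fit_vanishing_generators(Z0), fit_vanishing_generators(Z1)]
features = polynomial_layer(np.vstack([Z0[:3], Z1[:3]]), generators)
print(features.round(3))  # small entries in the column of the point's own class

In the full pipeline described by the abstract, a final linear classifier trained on such generator evaluations would play the role of step (iii), turning near-zero residuals on the correct class manifold into linearly separable features.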
@article{pelleriti2025_2502.15051,
  title   = {Approximating Latent Manifolds in Neural Networks via Vanishing Ideals},
  author  = {Nico Pelleriti and Max Zimmer and Elias Wirth and Sebastian Pokutta},
  journal = {arXiv preprint arXiv:2502.15051},
  year    = {2025}
}