Geometric Properties and Graph-Based Optimization of Neural Networks: Addressing Non-Linearity, Dimensionality, and Scalability
Deep learning models are often considered black boxes due to their complex hierarchical transformations, and identifying suitable architectures is crucial for maximizing predictive performance with limited data. Understanding the geometric properties of neural networks involves analyzing their structure, activation functions, and the transformations they perform in high-dimensional space; these properties influence learning, representation, and decision-making. This research explores neural networks through geometric metrics and graph structures, building upon foundational work in arXiv:2007.06559. It addresses the limited understanding of the geometric structures governing neural networks, particularly the data manifolds they operate on, which affect classification, optimization, and representation. We identify three key challenges: (1) overcoming the limitations of linear separability, (2) managing the trade-off between dimensionality and model complexity, and (3) improving scalability through graph representations. To address these challenges, we propose leveraging non-linear activation functions, optimizing network complexity via pruning and transfer learning, and developing efficient graph-based models. Our findings contribute to a deeper understanding of neural network geometry, supporting the development of more robust, scalable, and interpretable models.
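To make challenge (1) concrete: a purely linear classifier cannot separate a dataset like XOR, while a single hidden layer with a non-linear activation can. The following is a minimal illustrative sketch in PyTorch, not the paper's actual architecture or training setup; all sizes and hyperparameters here are assumptions.

```python
# Minimal sketch (not from the paper): a linear model fails on XOR,
# a small ReLU network succeeds. Sizes/hyperparameters are illustrative.
import torch
import torch.nn as nn

# XOR: the classic non-linearly-separable dataset.
X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

def train(model, steps=2000, lr=0.1):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    # Fraction of the four XOR points classified correctly.
    preds = (model(X) > 0).float()
    return (preds == y).float().mean().item()

linear = nn.Linear(2, 1)  # no hidden non-linearity
mlp = nn.Sequential(nn.Linear(2, 4), nn.ReLU(), nn.Linear(4, 1))

print("linear accuracy:", train(linear))  # at best 0.75 on XOR
print("MLP accuracy:   ", train(mlp))     # typically reaches 1.0
```

The linear model tops out at 3 of 4 points correct because no hyperplane separates XOR; the hidden ReLU layer bends the decision boundary, which is the geometric point the abstract's first challenge refers to.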
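For challenge (3), one natural way to recast a network as a graph is to treat neurons as nodes and weights as edges, after which standard graph tooling (weight-based pruning, community detection, etc.) applies. The construction below is an assumed minimal sketch using networkx, not the authors' specific graph-based model.

```python
# Assumed sketch (not the authors' construction): represent a small MLP
# as a weighted directed graph, neuron -> node, weight -> edge.
import networkx as nx
import torch.nn as nn

mlp = nn.Sequential(nn.Linear(2, 4), nn.ReLU(), nn.Linear(4, 1))

G = nx.DiGraph()
layer_idx = 0
for layer in mlp:
    if not isinstance(layer, nn.Linear):
        continue
    W = layer.weight.detach()          # shape: (out_features, in_features)
    n_out, n_in = W.shape
    for i in range(n_in):
        for j in range(n_out):
            # Node = (layer index, neuron index); edge weight = learned weight.
            G.add_edge((layer_idx, i), (layer_idx + 1, j),
                       weight=W[j, i].item())
    layer_idx += 1

print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
# e.g., dropping the weakest edges by |weight| is a crude graph-based
# proxy for the pruning the abstract mentions.
```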
@article{wienczkowski2025_2503.05761,
  title={Geometric Properties and Graph-Based Optimization of Neural Networks: Addressing Non-Linearity, Dimensionality, and Scalability},
  author={Michael Wienczkowski and Addisu Desta and Paschal Ugochukwu},
  journal={arXiv preprint arXiv:2503.05761},
  year={2025}
}