
Leveraging KANs for Expedient Training of Multichannel MLPs via Preconditioning and Geometric Refinement

9 pages main text, 7 pages appendix, 4 pages bibliography; 3 figures, 3 tables
Abstract

Multilayer perceptrons (MLPs) are a workhorse machine learning architecture, used in a variety of modern deep learning frameworks. Recently, however, Kolmogorov-Arnold Networks (KANs) have become increasingly popular due to their success on a range of problems, particularly for scientific machine learning tasks. In this paper, we exploit the relationship between KANs and multichannel MLPs to gain structural insight into how to train MLPs faster. We demonstrate that the KAN basis (1) provides geometrically localized support, and (2) acts as a preconditioned descent in the ReLU basis, overall resulting in expedited training and improved accuracy. Our results show the equivalence between free-knot spline KAN architectures and a class of MLPs that are refined geometrically along the channel dimension of each weight tensor. We exploit this structural equivalence to define a hierarchical refinement scheme that dramatically accelerates training of the multichannel MLP architecture. We show that further accuracy improvements can be achieved by allowing the 1D locations of the spline knots to be trained simultaneously with the weights. These advances are demonstrated on a range of benchmark examples for regression and scientific machine learning.
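To make the KAN/MLP equivalence described above concrete, the sketch below writes a free-knot linear spline as a weighted sum of shifted ReLUs, which is exactly a one-hidden-layer ReLU MLP whose channels correspond to the spline knots; both the coefficients and the 1D knot locations are trainable parameters. This is a minimal illustrative sketch in PyTorch, not the authors' implementation; the class name FreeKnotSplineEdge and all hyperparameters are hypothetical.

import torch
import torch.nn as nn

class FreeKnotSplineEdge(nn.Module):
    """A 1D free-knot linear spline phi(x) = sum_k c_k * ReLU(x - t_k).

    Written this way, the spline is a width-K, one-hidden-layer ReLU MLP,
    illustrating the KAN <-> multichannel-MLP correspondence: each knot
    t_k plays the role of one channel. Both the coefficients c_k and the
    knot locations t_k are trained simultaneously.
    """

    def __init__(self, num_knots: int = 8, lo: float = -1.0, hi: float = 1.0):
        super().__init__()
        # Initialize knots uniformly on [lo, hi]; training moves them freely.
        self.knots = nn.Parameter(torch.linspace(lo, hi, num_knots))
        self.coeffs = nn.Parameter(0.1 * torch.randn(num_knots))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (..., 1) -> broadcast against the K knots, contract over K.
        z = torch.relu(x.unsqueeze(-1) - self.knots)  # (..., 1, K)
        return (z * self.coeffs).sum(dim=-1)          # (..., 1)

# Usage: one such spline per (input, output) edge, as in a KAN layer.
edge = FreeKnotSplineEdge(num_knots=8)
x = torch.linspace(-1.0, 1.0, 5).unsqueeze(-1)
print(edge(x).shape)  # torch.Size([5, 1])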

@article{actor2025_2505.18131,
  title={Leveraging KANs for Expedient Training of Multichannel MLPs via Preconditioning and Geometric Refinement},
  author={Jonas A. Actor and Graham Harper and Ben Southworth and Eric C. Cyr},
  journal={arXiv preprint arXiv:2505.18131},
  year={2025}
}