Nonlinearity Enhanced Adaptive Activation Functions

Abstract
A general procedure for introducing parametric, learned nonlinearity into activation functions is found to enhance the accuracy of representative neural networks without requiring significant additional computational resources. Examples are given based on the standard rectified linear unit (ReLU) as well as several other frequently employed activation functions. The associated accuracy improvement is quantified both for the MNIST digit data set and for a convolutional neural network (CNN) benchmark example.
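
To make the idea of a parametric, learned nonlinearity concrete, the following is a minimal sketch of an adaptive activation module, assuming a standard PyTorch setup. The specific functional form (a base activation augmented by a trainable quadratic correction) is an illustrative assumption and is not taken from the paper; the class and parameter names are hypothetical.

```python
import torch
import torch.nn as nn


class AdaptiveActivation(nn.Module):
    """Illustrative adaptive activation: a base activation plus a
    learnable nonlinear correction term (assumed form, not the
    paper's specific construction)."""

    def __init__(self, base=torch.relu, init_coeff=0.0):
        super().__init__()
        self.base = base
        # Trainable coefficient scaling the added nonlinearity;
        # initialized to 0 so the layer starts as the base activation.
        self.coeff = nn.Parameter(torch.tensor(float(init_coeff)))

    def forward(self, x):
        y = self.base(x)
        # Add a learned quadratic term in the base activation's output.
        return y + self.coeff * y * y


# Usage sketch: drop the adaptive activation into a small MNIST-style MLP.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    AdaptiveActivation(),
    nn.Linear(128, 10),
)
```

Because the extra nonlinearity is controlled by a single scalar parameter per layer, the added computational and memory cost is negligible, which is consistent with the abstract's claim of no significant additional resources.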