Activation Functions Not To Active: A Plausible Theory on Interpreting Neural Networks

Researchers commonly believe that neural networks model a high-dimensional space but cannot give a clear definition of this space. What is this space? What is its dimension? And does it have finitely many dimensions? In this paper, we develop a plausible theory on interpreting neural networks in terms of the role of activation functions, and we define a high-dimensional (more precisely, an infinite-dimensional) space that neural networks, including deep-learning networks, could create. We show that the activation function acts as a magnifying function that maps the low-dimensional linear space into an infinite-dimensional space, which can distinctly identify the polynomial approximation of any multivariate continuous function whose variables are the features of the given dataset. Given a dataset in which each example has $d$ features $f_1, f_2, \cdots, f_d$, we believe that neural networks model a special space with infinite dimensions, each of which is a monomial $\prod_{j=1}^{d} f_j^{i_j} = f_1^{i_1} f_2^{i_2} \cdots f_d^{i_d}$ for some non-negative integers $i_1, i_2, \cdots, i_d$. We term such an infinite-dimensional space a . We see each such dimension as the minimum information unit. Every neuron node that has passed through an activation layer in a neural network is a , which is in fact a polynomial of infinite degree. This is something like a coordinate system, in which every multivariate function can be represented by a . We also show that training neural networks could at least be reduced to solving a system of nonlinear equations.
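As a hedged illustration of the magnifying role attributed to the activation function, assume an activation $\sigma$ that is analytic at the bias $b$ (for example, sigmoid or tanh; analyticity is our assumption here, not a condition stated above). Expanding an activated neuron around its bias turns the $d$-dimensional linear pre-activation into an infinite series over the monomial dimensions described above:

$$
\sigma\!\left(\sum_{j=1}^{d} w_j f_j + b\right)
= \sum_{k=0}^{\infty} \frac{\sigma^{(k)}(b)}{k!} \left(\sum_{j=1}^{d} w_j f_j\right)^{k}
= \sum_{i_1, i_2, \cdots, i_d \ge 0} c_{i_1, i_2, \cdots, i_d}\, f_1^{i_1} f_2^{i_2} \cdots f_d^{i_d},
$$

where each coefficient $c_{i_1, i_2, \cdots, i_d}$ collects the multinomial terms in the weights $w_1, \cdots, w_d$ and the derivatives $\sigma^{(k)}(b)$; in this sense the activated neuron is a polynomial of infinite degree in the features.

The claim that training could be reduced to solving a system of nonlinear equations can also be sketched concretely. The snippet below is a minimal, illustrative sketch and not the authors' construction: it writes the fit of a tiny 1-2-1 tanh network as one nonlinear equation per training example and hands the square system to a generic root finder (`scipy.optimize.fsolve`); the architecture, the teacher parameters `theta_true`, and the choice of solver are all assumptions made for the example.

```python
# Minimal sketch: "training as root-finding" for a 1-2-1 tanh network.
# One equation per training example, seven parameters, seven examples.
import numpy as np
from scipy.optimize import fsolve

def unpack(theta):
    w1, b1 = theta[0:2], theta[2:4]   # hidden layer: 2 units
    w2, b2 = theta[4:6], theta[6]     # output layer: 1 unit
    return w1, b1, w2, b2

def net(theta, x):
    w1, b1, w2, b2 = unpack(theta)
    h = np.tanh(np.outer(x, w1) + b1)  # (n, 2) activated hidden units
    return h @ w2 + b2                 # (n,) network outputs

x = np.linspace(-1.0, 1.0, 7)

# Targets generated by a "teacher" network with the same architecture,
# so an exact root of the system is known to exist.
theta_true = np.array([1.5, -2.0, 0.3, -0.4, 0.8, -1.1, 0.2])
y = net(theta_true, x)

def equations(theta):
    # Training conditions written as equations: net(x_k; theta) - y_k = 0.
    return net(theta, x) - y

rng = np.random.default_rng(0)
theta_star = fsolve(equations, rng.normal(size=7))
print("max residual:", np.abs(equations(theta_star)).max())
```

Whether the root finder converges depends on the starting point; the point of the sketch is only that the training conditions themselves form a square system of nonlinear equations in the network parameters.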