A ZeNN architecture to avoid the Gaussian trap

5 Figures
Appendix: 25 Pages
Abstract

We propose a new, simple architecture, Zeta Neural Networks (ZeNNs), to overcome several shortcomings of standard multi-layer perceptrons (MLPs). Namely, in the large-width limit, MLPs are non-parametric, lack a well-defined pointwise limit, lose non-Gaussian attributes, and become unable to perform feature learning; moreover, finite-width MLPs perform poorly at learning high frequencies. The new ZeNN architecture is inspired by three simple principles from harmonic analysis: i) enumerate the perceptrons and introduce a non-learnable weight to enforce convergence; ii) introduce a scaling (or frequency) factor; iii) choose activation functions that lead to near-orthogonal systems. We show that these ideas fix the aforementioned shortcomings of MLPs. In fact, in the infinite-width limit, ZeNNs converge pointwise, exhibit a rich asymptotic structure beyond Gaussianity, and perform feature learning. Moreover, when appropriate activation functions are chosen, (finite-width) ZeNNs excel at learning high-frequency features of functions with low-dimensional domains.
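The abstract does not give the precise layer definition, but the three principles suggest a minimal sketch along the following lines. Everything concrete below is an assumption for illustration, not the paper's construction: the zeta-like non-learnable decay n^{-s} (the presumed source of the "Zeta" name), the sine activation as an example of a near-orthogonal system, the placement of the frequency factor on the pre-activation, and all names (ZeNNLayer, s, width).

```python
import torch
import torch.nn as nn

class ZeNNLayer(nn.Module):
    """Hypothetical one-hidden-layer ZeNN sketch. Unit n carries a fixed
    (non-learnable) weight n^{-s} to enforce convergence of the width sum,
    and a frequency factor n scales its pre-activation. A sine activation
    yields a near-orthogonal system on bounded domains."""

    def __init__(self, in_dim: int, width: int, s: float = 1.5):
        super().__init__()
        self.w = nn.Parameter(torch.randn(width, in_dim))  # learnable input weights
        self.b = nn.Parameter(torch.zeros(width))          # learnable biases
        self.a = nn.Parameter(torch.randn(width))          # learnable amplitudes
        n = torch.arange(1, width + 1, dtype=torch.float32)
        self.register_buffer("decay", n.pow(-s))           # fixed zeta-like weights n^{-s}
        self.register_buffer("freq", n)                    # scaling (frequency) factors

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Pre-activation of unit n is scaled by its frequency factor n.
        z = self.freq * (x @ self.w.T + self.b)
        # Weighted sum: sum_n n^{-s} * a_n * sin(n * (w_n . x + b_n)).
        return (self.decay * self.a * torch.sin(z)).sum(dim=-1)

# Hypothetical usage: evaluate on a 1-D input grid.
net = ZeNNLayer(in_dim=1, width=64, s=1.5)
x = torch.linspace(-1.0, 1.0, 256).unsqueeze(-1)
y = net(x)  # shape (256,)
```

One motivation for this choice in the sketch: with s > 1 the series ∑ n^{-s} is finite, so for uniformly bounded amplitudes the sum over units converges absolutely as the width grows, mirroring the pointwise-convergence claim; the growing frequency factors n let later units represent ever-higher frequencies, consistent with the claimed advantage on high-frequency targets.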
