Approximation of functions with one-bit neural networks

The celebrated universal approximation theorems for neural networks roughly state that any reasonable function can be arbitrarily well-approximated by a network whose parameters are appropriately chosen real numbers. This paper examines the approximation capabilities of one-bit neural networks -- those whose nonzero parameters are $\pm a$ for some fixed $a \neq 0$. One of our main theorems shows that for any $f \in C^s([0,1]^d)$ with $\|f\|_\infty < 1$ and error $\varepsilon > 0$, there is an $f_{NN}$ such that $|f(x) - f_{NN}(x)| \leq \varepsilon$ for all $x$ away from the boundary of $[0,1]^d$, and $f_{NN}$ is implementable either by a quadratic network with $O(\varepsilon^{-2d/s})$ parameters or by a ReLU network with $O(\varepsilon^{-2d/s} \log(1/\varepsilon))$ parameters, as $\varepsilon \to 0$. We establish new approximation results for iterated multivariate Bernstein operators, error estimates for noise-shaping quantization on the Bernstein basis, and a novel implementation of the Bernstein polynomials by one-bit quadratic and ReLU neural networks.
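Two of the ingredients named in the abstract, Bernstein polynomial approximation and noise-shaping quantization of its coefficients, are easy to illustrate in isolation. Below is a minimal numerical sketch in Python of the univariate, first-order case: the target function `f`, the degree `n`, and the simple sign quantizer `sigma_delta_one_bit` are illustrative assumptions, not the paper's multivariate iterated-operator construction or its one-bit network implementation.

```python
import numpy as np
from math import comb

def bernstein_basis(n, x):
    """Return the (n+1) x len(x) matrix B[k, j] = C(n, k) * x_j^k * (1 - x_j)^(n - k)."""
    k = np.arange(n + 1)[:, None]
    binom = np.array([comb(n, i) for i in range(n + 1)], dtype=float)[:, None]
    return binom * x**k * (1 - x)**(n - k)

def sigma_delta_one_bit(c):
    """First-order noise-shaping (Sigma-Delta) quantizer: maps coefficients
    c in [-1, 1] to q in {-1, +1} while keeping the running error bounded."""
    q = np.empty_like(c)
    u = 0.0                        # internal state; |u| <= 1 throughout
    for i, ci in enumerate(c):
        q[i] = 1.0 if ci + u >= 0.0 else -1.0
        u += ci - q[i]             # error is pushed into the state, not the output
    return q

f = lambda x: 0.8 * np.sin(2 * np.pi * x) * x    # hypothetical target with sup-norm < 1

n = 200                                          # Bernstein degree
x = np.linspace(0.05, 0.95, 500)                 # stay away from the boundary of [0, 1]
B = bernstein_basis(n, x)

c = f(np.arange(n + 1) / n)                      # classical Bernstein coefficients f(k/n)
q = sigma_delta_one_bit(c)                       # one-bit surrogate coefficients in {-1, +1}

print("max |f - Bernstein|      :", np.abs(f(x) - c @ B).max())
print("max |Bernstein - one-bit|:", np.abs((c - q) @ B).max())
```

The point of noise shaping is visible in the second printed quantity: because adjacent Bernstein basis functions overlap heavily and vary slowly in $k$, the first-order quantization error largely cancels when summed against the basis, so the one-bit coefficients track the real-coefficient approximant far better than rounding each coefficient independently would.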