Expressive power of binary and ternary neural networks

Abstract

We show that deep sparse ReLU networks with ternary weights and deep ReLU networks with binary weights can approximate $\beta$-Hölder functions on $[0,1]^d$. Also, for any interval $[a,b) \subset \mathbb{R}$, continuous functions on $[0,1]^d$ can be approximated by networks of depth $2$ with the binary activation function $\mathds{1}_{[a,b)}$.
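
The depth-$2$ claim can be made concrete in one dimension: a hidden unit $x \mapsto \mathds{1}_{[a,b)}(wx + c)$ can be tuned to act as the indicator of a grid cell, so a weighted sum of such units is a piecewise-constant approximant of a continuous target. The numpy sketch below illustrates this standard construction; it is not the paper's proof, and the choice $[a,b) = [0,1)$, the uniform grid, and all function names are illustrative assumptions.

```python
import numpy as np

def binary_activation(t, a=0.0, b=1.0):
    """Binary activation 1_{[a,b)}: returns 1 where a <= t < b, else 0."""
    return ((t >= a) & (t < b)).astype(float)

def depth2_indicator_network(x, f, n):
    """Depth-2 network sum_k f(k/n) * 1_{[0,1)}(n*x - k).

    With inner weights w_k = n and biases c_k = -k, each hidden unit is the
    indicator of the grid cell [k/n, (k+1)/n), so the network outputs the
    piecewise-constant approximant of f on a uniform grid of n cells.
    """
    k = np.arange(n)                                          # hidden-unit index
    hidden = binary_activation(n * x[:, None] - k[None, :])   # shape (len(x), n)
    outer_weights = f(k / n)                                  # samples of f at cell endpoints
    return hidden @ outer_weights

if __name__ == "__main__":
    f = lambda t: np.sin(2 * np.pi * t)   # continuous target on [0, 1]
    x = np.linspace(0.0, 1.0, 1000, endpoint=False)
    for n in (10, 100, 1000):
        err = np.max(np.abs(depth2_indicator_network(x, f, n) - f(x)))
        print(f"n = {n:4d} hidden units, sup-norm error ~ {err:.4f}")
```

As the number of hidden units $n$ grows, the sup-norm error is controlled by the modulus of continuity of $f$ over cells of width $1/n$, which is the mechanism behind this kind of depth-$2$ approximation result.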
