Expressive power of binary and ternary neural networks

Abstract
We show that deep sparse ReLU networks with ternary weights and deep ReLU networks with binary weights can approximate $\beta$-Hölder functions on $[0,1]^d$. Also, for any interval $[a,b)\subset\mathbb{R}$, continuous functions on $[a,b]$ can be approximated by networks of depth $2$ with the binary activation function $\mathbb{1}_{[0,1)}$.
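As a rough illustration of the second statement (not the construction from the paper), the following NumPy sketch realizes a depth-2 network whose hidden units use a binary step activation: thresholding at $n$ knots and summing the jumps yields a piecewise-constant approximation of a continuous function on $[a,b]$, with uniform error governed by the modulus of continuity. The function names (`depth2_binary_net`, `step`), the choice of activation $\mathbb{1}_{[0,\infty)}$, and the test function are illustrative assumptions, not the paper's.

```python
import numpy as np

def step(z):
    # Binary activation: 1 if z >= 0, else 0.
    return (z >= 0).astype(float)

def depth2_binary_net(f, a, b, n):
    """Depth-2 network sketch: n hidden step units with thresholds at
    knots t_1 < ... < t_n, followed by a linear readout.  Illustrative
    only -- not the construction used in the paper."""
    t = np.linspace(a, b, n + 1)       # knots t_0, ..., t_n
    jumps = np.diff(f(t))              # output weights f(t_k) - f(t_{k-1})

    def fhat(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        h = step(x[:, None] - t[None, 1:])   # hidden layer, shape (len(x), n)
        return f(t[0]) + h @ jumps           # linear output layer

    return fhat

# Usage: approximate sin(2*pi*x) on [0, 1] with 200 hidden units.
f = lambda x: np.sin(2 * np.pi * x)
fhat = depth2_binary_net(f, 0.0, 1.0, 200)
xs = np.linspace(0.0, 1.0, 10_000)
print(np.max(np.abs(f(xs) - fhat(xs))))  # uniform error ~ 2*pi/200
```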