Why Do Networks Need Negative Weights?

Abstract

Why do networks have negative weights at all? The answer is: to learn more functions. We mathematically prove that deep neural networks with all non-negative weights are not universal approximators. This fundamental result has long been assumed by the deep learning literature, but it had not previously been proven, nor had the necessity of negative weights been demonstrated.
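One way to see the flavor of this limitation (an illustrative sketch, not the paper's proof): if every weight is non-negative and the activation is monotone, such as ReLU, then each layer is non-decreasing in each of its inputs, so the whole network is monotone and can never approximate a decreasing target such as f(x) = -x. A minimal NumPy check, using the hypothetical helper `nonneg_mlp`:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def nonneg_mlp(x, weights, biases):
    """Forward pass of an MLP whose weight matrices are all non-negative."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ W + b)
    return h @ weights[-1] + biases[-1]

# Random non-negative weights (biases are left unconstrained).
sizes = [1, 16, 16, 1]
weights = [np.abs(rng.normal(size=(m, n))) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [rng.normal(size=(n,)) for n in sizes[1:]]

# With non-negative weights and a monotone activation, the output is
# non-decreasing in the input, so a strictly decreasing target like
# f(x) = -x cannot be approximated, no matter the weight values.
xs = np.linspace(-3.0, 3.0, 200).reshape(-1, 1)
ys = nonneg_mlp(xs, weights, biases).ravel()
assert np.all(np.diff(ys) >= -1e-9), "output should be monotone non-decreasing"
print("output range over [-3, 3]:", ys.min(), "->", ys.max())
```

The same monotonicity argument applies to any non-decreasing activation (sigmoid, tanh, and so on), so this particular obstruction is not specific to ReLU.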
