Why do networks have inhibitory/negative connections?

Abstract

Why do brains have inhibitory connections? Neuroscientists may answer: to balance excitatory connections, to memorize, to decide, to avoid constant seizures, and many more. There seem to be many function-specific explanations for the necessity of inhibitory connections, yet a general theoretical account of why brains need them has been missing. Leveraging deep neural networks (DNNs), a well-established model for the brain, we ask: why do networks have negative weights? Our answer is: to learn more functions. We prove that, in the absence of negative weights, neural networks are not universal approximators. Further, we provide insight into the geometric properties of the representation space that non-negative DNNs cannot represent. While this may be an intuitive result, to the best of our knowledge there is no formal theory, in either the machine learning or neuroscience literature, that demonstrates why negative weights are crucial for representation capacity. Our result provides the first theoretical justification for why inhibitory connections in brains and negative weights in DNNs are important for networks to represent all functions.
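To convey the intuition behind the non-universality claim (this is an illustration, not the paper's proof), consider that a network whose weights are all non-negative and whose activations are non-decreasing computes a function that is non-decreasing in every input, so it cannot approximate any decreasing target such as f(x) = -x. The minimal sketch below, with an assumed two-layer ReLU architecture and random non-negative weights, checks this numerically:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-layer ReLU network; weights constrained non-negative, biases free.
W1 = rng.uniform(0.0, 1.0, size=(16, 1))   # non-negative weights
b1 = rng.normal(size=16)
W2 = rng.uniform(0.0, 1.0, size=(1, 16))   # non-negative weights
b2 = rng.normal(size=1)

def net(x: float) -> float:
    h = np.maximum(0.0, W1 @ np.array([x]) + b1)  # ReLU is non-decreasing
    return float(W2 @ h + b2)

# Non-negative weights compose non-decreasing maps, so x -> net(x)
# is non-decreasing; no such network can fit f(x) = -x.
xs = np.linspace(-3.0, 3.0, 601)
ys = np.array([net(x) for x in xs])
assert np.all(np.diff(ys) >= -1e-12), "output should be non-decreasing"
print("Non-negative-weight ReLU net is monotone: it cannot represent f(x) = -x.")
```

The same monotonicity argument holds for any non-decreasing activation (sigmoid, tanh, ReLU); the specific width, depth, and weight distribution here are illustrative choices.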
