Why do networks have inhibitory/negative connections?

5 August 2022
Qingyang Wang
Michael A. Powell
Ali Geisa
Eric W. Bridgeford
Joshua T. Vogelstein
arXiv:2208.03211
Abstract

Why do brains have inhibitory connections? Neuroscientists may answer: to balance excitatory connections, to memorize, to decide, to avoid constant seizures, and more. There seem to be many function-specific explanations for the necessity of inhibitory connections, but a general theoretical account of why brains need them has been lacking. Leveraging deep neural networks (DNNs), a well-established model of the brain, we ask: why do networks have negative weights? Our answer: to learn more functions. We prove that, in the absence of negative weights, neural networks are not universal approximators. We further characterize the geometric properties of the representation space that non-negative DNNs cannot capture. While the result may be intuitive, to the best of our knowledge no formal theory, in either the machine learning or the neuroscience literature, demonstrates why negative weights are crucial for representation capacity. Our result provides the first theoretical justification for why inhibitory connections in brains and negative weights in DNNs are essential for networks to represent all functions.
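One simple special case makes the non-universality claim concrete: if every weight is non-negative and the activation is monotone non-decreasing (e.g., ReLU), each layer preserves the ordering of its inputs, so the whole network computes a function that is non-decreasing in every input coordinate. No width or depth then suffices to approximate a strictly decreasing target such as f(x) = -x. The sketch below is our own illustration of this special case, not the paper's proof; the architecture, layer sizes, and test interval are arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)

def nonneg_relu_net(x, weights, biases):
    """Forward pass of a ReLU network whose weights are all >= 0."""
    h = x
    for W, b in zip(weights, biases):
        h = np.maximum(W @ h + b, 0.0)  # ReLU; W >= 0 preserves ordering
    return h

# Random non-negative weights for a 1 -> 16 -> 16 -> 1 network.
sizes = [1, 16, 16, 1]
weights = [rng.uniform(0, 1, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [rng.normal(size=m) for m in sizes[1:]]

# Monotonicity check: outputs along an increasing input grid never decrease.
xs = np.linspace(-3, 3, 201)
ys = np.array([nonneg_relu_net(np.array([x]), weights, biases)[0] for x in xs])
assert np.all(np.diff(ys) >= -1e-12), "non-negative ReLU net should be monotone"
print("non-decreasing everywhere:", bool(np.all(np.diff(ys) >= -1e-12)))

# Against the strictly decreasing target -x, the sup-norm error of any
# non-decreasing function is bounded away from zero on this interval.
print("gap vs f(x) = -x:", np.max(np.abs(ys - (-xs))))

The monotonicity argument covers only monotone activations; the paper's theorem concerns general non-negative networks, so this sketch should be read as intuition for the result rather than its full scope.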
