  3. 2301.12892
36
7

Quantifying and maximizing the information flux in recurrent neural networks

30 January 2023
C. Metzner
Marius E. Yamakou
Dennis Voelkl
A. Schilling
P. Krauss
Abstract

Free-running Recurrent Neural Networks (RNNs), especially probabilistic models, generate an ongoing information flux that can be quantified by the mutual information $I\left[\vec{x}(t),\vec{x}(t+1)\right]$ between subsequent system states $\vec{x}$. Although former studies have shown that $I$ depends on the statistics of the network's connection weights, it is unclear (1) how to maximize $I$ systematically and (2) how to quantify the flux in large systems, where computing the mutual information becomes intractable. Here, we address these questions using Boltzmann machines as model systems. We find that in networks with moderately strong connections, the mutual information $I$ is approximately a monotonic transformation of the root-mean-square averaged Pearson correlations between neuron pairs, a quantity that can be efficiently computed even in large systems. Furthermore, evolutionary maximization of $I\left[\vec{x}(t),\vec{x}(t+1)\right]$ reveals a general design principle for the weight matrices, enabling the systematic construction of systems with a high spontaneous information flux. Finally, we simultaneously maximize information flux and the mean period length of cyclic attractors in the state space of these dynamical networks. Our results are potentially useful for the construction of RNNs that serve as short-time memories or pattern generators.
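The abstract mentions two quantities that are straightforward to prototype: the mutual information $I\left[\vec{x}(t),\vec{x}(t+1)\right]$ between successive network states, which is only tractable for small networks, and the root-mean-square averaged Pearson correlation between neuron pairs as a scalable proxy. The sketch below is not the authors' code; the synchronous update rule, the weight scale, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_boltzmann(W, b, steps, rng):
    """Free-running stochastic binary network with synchronous updates (assumption).

    Each neuron switches to state 1 with probability sigmoid(W x + b).
    Returns the state trajectory with shape (steps, N).
    """
    N = W.shape[0]
    x = rng.integers(0, 2, size=N)
    traj = np.empty((steps, N), dtype=int)
    for t in range(steps):
        p_on = 1.0 / (1.0 + np.exp(-(W @ x + b)))
        x = (rng.random(N) < p_on).astype(int)
        traj[t] = x
    return traj

def rms_pearson(traj):
    """Root-mean-square of pairwise Pearson correlations between neurons.

    This is the cheap proxy discussed in the abstract; it scales to large N.
    """
    C = np.corrcoef(traj.T)            # N x N correlation matrix
    iu = np.triu_indices_from(C, k=1)  # distinct neuron pairs only
    vals = C[iu]
    vals = vals[np.isfinite(vals)]     # drop pairs involving constant neurons
    return np.sqrt(np.mean(vals**2))

def mutual_information_successive(traj):
    """Plug-in estimate of I[x(t), x(t+1)] from the empirical joint distribution
    of successive states. Feasible only for small N, since there are 2^N states.
    """
    N = traj.shape[1]
    codes = traj @ (1 << np.arange(N))  # encode each binary state as an integer
    joint = np.zeros((2**N, 2**N))
    for a, b_ in zip(codes[:-1], codes[1:]):
        joint[a, b_] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1)
    py = joint.sum(axis=0)
    mask = joint > 0
    return np.sum(joint[mask] * np.log2(joint[mask] / (px[:, None] * py[None, :])[mask]))

# Small demo with random Gaussian weights of moderate strength (assumed scale).
N = 8
W = rng.normal(0.0, 1.5 / np.sqrt(N), size=(N, N))
b = np.zeros(N)
traj = simulate_boltzmann(W, b, steps=20000, rng=rng)
print("RMS Pearson correlation:", rms_pearson(traj))
print("I[x(t), x(t+1)] (bits):", mutual_information_successive(traj))
```

The plug-in mutual-information estimate requires visiting a state space of size $2^N$ and is therefore restricted to small networks, which is exactly the motivation for the correlation-based proxy described in the abstract.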

View on arXiv