arXiv:1912.07277

ITENE: Intrinsic Transfer Entropy Neural Estimator

16 December 2019
Jingjing Zhang
Osvaldo Simeone
Zoran Cvetkovic
E. Abela
M. Richardson
Abstract

Quantifying the directionality of information flow is instrumental in understanding, and possibly controlling, the operation of many complex systems, such as transportation, social, neural, or gene-regulatory networks. The standard Transfer Entropy (TE) metric follows Granger's causality principle by measuring the Mutual Information (MI) between the past states of a source signal X and the future state of a target signal Y while conditioning on the past states of Y. Hence, the TE quantifies the improvement, as measured by the log-loss, in the prediction of the target sequence Y that can be accrued when past samples from X are available in addition to the past of Y. However, by conditioning on the past of Y, the TE also measures information that can be synergistically extracted by observing the pasts of both X and Y, and not solely the past of X. Building on a private key agreement formulation, the Intrinsic TE (ITE) aims to discount such synergistic information in order to quantify the degree to which X is individually predictive of Y, independent of Y's past. In this paper, an estimator of the ITE is proposed that is inspired by the recently proposed Mutual Information Neural Estimation (MINE). The estimator is based on a variational bound on the KL divergence, two-sample neural network classifiers, and the pathwise estimator of Monte Carlo gradients.
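
For reference, the TE described in words above can be written as a conditional MI; the finite memory windows of lengths m (for X) and n (for Y) are illustrative notation, not taken from the abstract:

TE_{X \to Y} = I(X_{t-m}^{t-1}; Y_t \mid Y_{t-n}^{t-1}) = H(Y_t \mid Y_{t-n}^{t-1}) - H(Y_t \mid Y_{t-n}^{t-1}, X_{t-m}^{t-1}),

i.e., the TE is exactly the log-loss prediction gain obtained by adding the past of X to the past of Y.

To make the MINE-style variational estimation concrete, below is a minimal sketch of a Donsker-Varadhan lower bound on MI maximized over a small neural network. It illustrates the general technique the paper builds on, not the authors' ITENE implementation; the architecture, data, and hyper-parameters are assumptions.

import math
import torch
import torch.nn as nn

class StatisticsNet(nn.Module):
    """Small MLP T(x, y) parameterizing the Donsker-Varadhan bound."""
    def __init__(self, dim_x, dim_y, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_x + dim_y, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=-1))

def dv_bound(T, x, y):
    """Donsker-Varadhan bound: E_joint[T] - log E_marginal[exp(T)].
    Marginal samples are approximated by shuffling y within the batch."""
    joint = T(x, y).mean()
    y_perm = y[torch.randperm(y.size(0))]
    t_marg = T(x, y_perm)
    log_mean_exp = torch.logsumexp(t_marg, dim=0) - math.log(t_marg.size(0))
    return joint - log_mean_exp.squeeze()

# Toy usage (an assumption for illustration): correlated Gaussians.
torch.manual_seed(0)
n, rho = 2000, 0.8
x = torch.randn(n, 1)
y = rho * x + math.sqrt(1 - rho ** 2) * torch.randn(n, 1)

T = StatisticsNet(1, 1)
opt = torch.optim.Adam(T.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss = -dv_bound(T, x, y)   # gradient ascent on the lower bound
    loss.backward()
    opt.step()

print(f"MI estimate (nats): {dv_bound(T, x, y).item():.3f}")

For this bivariate Gaussian, the ground-truth MI is -0.5 * log(1 - rho^2) ≈ 0.51 nats, which the trained bound should approach from below.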
