
MIND: Inductive Mutual Information Estimation, A Convex Maximum-Entropy Copula Approach

25 February 2021
Yves-Laurent Kom Samo
Abstract

We propose a novel estimator of the mutual information between two ordinal vectors $x$ and $y$. Our approach is inductive (as opposed to deductive) in that it depends on the data generating distribution solely through some nonparametric properties revealing associations in the data, and does not require having enough data to fully characterize the true joint distribution $P_{x, y}$. Specifically, our approach consists of (i) noting that $I(y; x) = I(u_y; u_x)$, where $u_y$ and $u_x$ are the copula-uniform dual representations of $y$ and $x$ (i.e. their images under the probability integral transform), and (ii) estimating the copula entropies $h(u_y)$, $h(u_x)$ and $h(u_y, u_x)$ by solving a maximum-entropy problem over the space of copula densities under a constraint of the type $\alpha_m = E\left[\phi_m(u_y, u_x)\right]$. We prove that, so long as the constraint is feasible, this problem admits a unique solution, that the solution is in the exponential family, and that it can be learned by solving a convex optimization problem. The resulting estimator, which we denote MIND, is marginal-invariant, always non-negative, unbounded for any sample size $n$, consistent, has MSE rate $O(1/n)$, and is more data-efficient than competing approaches. Beyond mutual information estimation, we illustrate that our approach may be used to mitigate mode collapse in GANs by maximizing the entropy of the copula of fake samples, a model we refer to as Copula Entropy Regularized GAN (CER-GAN).
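The probability integral transform in step (i) can be approximated empirically by rank-normalization, and the abstract's marginal-invariance claim then follows from the fact that strictly increasing transforms of a coordinate leave its ranks unchanged. A minimal sketch of this idea (the Gaussian test pair and the helper name `copula_uniform` are illustrative, not from the paper):

```python
import numpy as np

def copula_uniform(z):
    """Empirical probability integral transform: map a 1-D sample to its
    normalized ranks in (0, 1), a standard empirical approximation of the
    copula-uniform dual representation."""
    n = len(z)
    ranks = np.argsort(np.argsort(z))  # 0-based rank of each observation
    return (ranks + 1) / (n + 1)

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)
y = 0.8 * x + 0.6 * rng.normal(size=n)  # correlated Gaussian pair, corr = 0.8

u_x, u_y = copula_uniform(x), copula_uniform(y)

# The marginals of (u_x, u_y) are uniform on (0, 1) by construction,
# while the dependence between the coordinates is preserved.
print(u_x.mean())  # exactly 0.5 for any tie-free sample

# Marginal-invariance: a strictly increasing transform of x (here exp)
# does not change its ranks, hence not its copula-uniform representation.
print(np.allclose(copula_uniform(np.exp(x)), u_x))
```

Since the copula-uniform representation depends on the data only through ranks, any estimator built on $u_x$ and $u_y$, such as MIND, automatically inherits this invariance to monotone transformations of the marginals.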
