ResearchTrend.AI

arXiv:1907.07331
Learnability for the Information Bottleneck

17 July 2019
Tailin Wu
Ian S. Fischer
Isaac L. Chuang
Max Tegmark
Abstract

The Information Bottleneck (IB) method (Tishby et al., 2000) provides an insightful and principled approach for balancing compression and prediction in representation learning. The IB objective I(X;Z) − βI(Y;Z) employs a Lagrange multiplier β to tune this trade-off. In practice, however, not only is β chosen empirically without theoretical guidance, but there is also a lack of theoretical understanding of the relationship between β, learnability, the intrinsic nature of the dataset, and model capacity. In this paper, we show that if β is improperly chosen, learning cannot happen: the trivial representation P(Z|X) = P(Z) becomes the global minimum of the IB objective. We show how this can be avoided by identifying a sharp phase transition between the unlearnable and the learnable regimes as β is varied. This phase transition defines the concept of IB-Learnability. We prove several sufficient conditions for IB-Learnability, which provide theoretical guidance for choosing a good β. We further show that IB-Learnability is determined by the largest confident, typical, and imbalanced subset of the examples (the conspicuous subset), and we discuss its relation to model capacity. We give practical algorithms to estimate the minimum β for a given dataset, and we empirically demonstrate our theoretical conditions with analyses of synthetic datasets, MNIST, and CIFAR10.
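The phase transition described in the abstract can be illustrated numerically for a small discrete problem. The sketch below (not from the paper; all function names are illustrative) evaluates the IB objective I(X;Z) − βI(Y;Z) for two hand-picked encoders on a dataset where X perfectly predicts Y: the trivial encoder P(Z|X) = P(Z) always scores 0, while an informative encoder scores below 0 only once β exceeds the learnability threshold (here β = 1).

```python
import numpy as np

def mutual_information(p_joint):
    """Mutual information (in nats) of a 2D joint distribution array."""
    p_a = p_joint.sum(axis=1, keepdims=True)   # marginal over rows
    p_b = p_joint.sum(axis=0, keepdims=True)   # marginal over columns
    mask = p_joint > 0                          # skip zero-probability cells
    return float((p_joint[mask] * np.log(p_joint[mask] / (p_a @ p_b)[mask])).sum())

def ib_objective(p_xy, p_z_given_x, beta):
    """IB objective I(X;Z) - beta * I(Y;Z) for a discrete encoder P(Z|X)."""
    p_x = p_xy.sum(axis=1)                      # marginal P(X)
    p_xz = p_z_given_x * p_x[:, None]           # joint P(X,Z)
    p_yz = p_xy.T @ p_z_given_x                 # joint P(Y,Z) via the Markov chain Z - X - Y
    return mutual_information(p_xz) - beta * mutual_information(p_yz)

# Toy dataset: X in {0,1} determines Y exactly.
p_xy = np.array([[0.5, 0.0],
                 [0.0, 0.5]])
identity = np.eye(2)            # informative encoder: Z = X
trivial = np.full((2, 2), 0.5)  # trivial encoder: P(Z|X) = P(Z)

for beta in (0.5, 2.0):
    print(beta, ib_objective(p_xy, identity, beta), ib_objective(p_xy, trivial, beta))
```

For β = 0.5 the trivial encoder's objective value of 0 is the smaller one (unlearnable regime), whereas for β = 2 the identity encoder achieves a strictly negative value, so a nontrivial representation becomes preferable.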
