Evaluated CMI Bounds for Meta Learning: Tightness and Expressiveness

12 October 2022
Fredrik Hellström
G. Durisi
arXiv: 2210.06511
Abstract

Recent work has established that the conditional mutual information (CMI) framework of Steinke and Zakynthinou (2020) is expressive enough to capture generalization guarantees in terms of algorithmic stability, VC dimension, and related complexity measures for conventional learning (Harutyunyan et al., 2021, Haghifam et al., 2021). Hence, it provides a unified method for establishing generalization bounds. In meta learning, there has so far been a divide between information-theoretic results and results from classical learning theory. In this work, we take a first step toward bridging this divide. Specifically, we present novel generalization bounds for meta learning in terms of the evaluated CMI (e-CMI). To demonstrate the expressiveness of the e-CMI framework, we apply our bounds to a representation learning setting, with $n$ samples from $\hat{n}$ tasks parameterized by functions of the form $f_i \circ h$. Here, each $f_i \in \mathcal{F}$ is a task-specific function, and $h \in \mathcal{H}$ is the shared representation. For this setup, we show that the e-CMI framework yields a bound that scales as $\sqrt{\mathcal{C}(\mathcal{H})/(n\hat{n}) + \mathcal{C}(\mathcal{F})/n}$, where $\mathcal{C}(\cdot)$ denotes a complexity measure of the hypothesis class. This scaling behavior coincides with the one reported in Tripuraneni et al. (2020) using Gaussian complexity.
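The following is a minimal numerical sketch, not taken from the paper, of how the stated scaling behaves: the shared-representation term shrinks with both the number of tasks and the samples per task, while the task-specific term shrinks only with the samples per task. The function name bound_scaling and the complexity values C_H and C_F are placeholders; the paper defines the actual complexity measure and constants, which are omitted here.

import math

def bound_scaling(C_H: float, C_F: float, n: int, n_hat: int) -> float:
    # Scaling of the bound sqrt(C(H)/(n*n_hat) + C(F)/n), up to constants.
    # C_H, C_F: placeholder complexity values for the shared class H and
    # the task-specific class F (illustrative assumption, not from the paper).
    return math.sqrt(C_H / (n * n_hat) + C_F / n)

# Increasing the number of tasks n_hat only reduces the representation term;
# increasing the per-task sample size n reduces both terms.
for n, n_hat in [(100, 1), (100, 10), (100, 100), (1000, 100)]:
    print(n, n_hat, round(bound_scaling(C_H=50.0, C_F=5.0, n=n, n_hat=n_hat), 4))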
