Bayes meets Bernstein at the Meta Level: an Analysis of Fast Rates in Meta-Learning with PAC-Bayes

23 February 2023
Charles Riou
Pierre Alquier
Badr-Eddine Chérief-Abdellatif
arXiv:2302.11709 (PDF, HTML)
Abstract

Bernstein's condition is a key assumption that guarantees fast rates in machine learning. For example, the Gibbs algorithm with prior $\pi$ has an excess risk in $O(d_{\pi}/n)$, as opposed to the standard $O(\sqrt{d_{\pi}/n})$, where $n$ denotes the number of observations and $d_{\pi}$ is a complexity parameter which depends on the prior $\pi$. In this paper, we examine the Gibbs algorithm in the context of meta-learning, i.e., when learning the prior $\pi$ from $T$ tasks (with $n$ observations each) generated by a meta distribution. Our main result is that Bernstein's condition always holds at the meta level, regardless of its validity at the observation level. This implies that the additional cost to learn the Gibbs prior $\pi$, which will reduce the term $d_{\pi}$ across tasks, is in $O(1/T)$, instead of the expected $O(1/\sqrt{T})$. We further illustrate how this result improves on standard rates in three different settings: discrete priors, Gaussian priors and mixture of Gaussians priors.
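
For context, a minimal sketch of the objects the abstract refers to, assuming the standard Gibbs-posterior convention (the exact loss, temperature, and complexity term $d_\pi$ used in the paper are not specified in this abstract): given a prior $\pi$, a loss $\ell$, an inverse temperature $\beta > 0$, and observations $Z_1, \dots, Z_n$, the Gibbs posterior and the two rate regimes contrasted above read

$$
\hat{\rho}_{\pi}(\mathrm{d}\theta) \;\propto\; \pi(\mathrm{d}\theta)\,\exp\!\Bigl(-\beta \sum_{i=1}^{n} \ell(\theta, Z_i)\Bigr),
\qquad
\mathbb{E}_{\theta \sim \hat{\rho}_{\pi}}\bigl[R(\theta)\bigr] - \inf_{\theta} R(\theta) \;=\;
\begin{cases}
O\!\left(\sqrt{d_{\pi}/n}\right) & \text{in general (slow rate)},\\[2pt]
O\!\left(d_{\pi}/n\right) & \text{under Bernstein's condition (fast rate)},
\end{cases}
$$

where $R(\theta)$ denotes the expected risk. The abstract's claim is that, at the meta level, the fast-rate case always applies, so learning the prior $\pi$ over $T$ tasks costs $O(1/T)$ rather than the $O(1/\sqrt{T})$ one would expect without Bernstein's condition.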
