Learning Algorithm Generalization Error Bounds via Auxiliary Distributions

2 October 2022
Gholamali Aminian
Saeed Masiha
Laura Toni
M. Rodrigues
arXiv: 2210.00483
Abstract

Generalization error bounds are essential for understanding how well machine learning models perform. In this work, we propose a novel approach, the Auxiliary Distribution Method, for deriving new upper bounds on the generalization error in supervised learning scenarios. We show that, under certain conditions, our general upper bounds can be specialized to new bounds involving the generalized α-Jensen-Shannon information and the α-Rényi information (0 < α < 1) between a random variable modeling the set of training samples and another random variable modeling the set of hypotheses. Our upper bounds based on the generalized α-Jensen-Shannon information are also guaranteed to be finite. In addition, we demonstrate how the auxiliary distribution method can be used to derive upper bounds on the generalization error under a distribution mismatch in supervised learning, where the mismatch is measured by the α-Jensen-Shannon divergence or the α-Rényi divergence (0 < α < 1) between the distributions of the test and training data samples. Finally, we outline conditions under which our proposed upper bounds can be tighter than earlier upper bounds.
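For context, the following is a minimal sketch, in standard notation, of the quantities the abstract refers to. The symbols used here (μ for the data distribution, S for the training sample, W for the hypothesis) are our own assumptions for illustration; the paper's exact definitions and conventions may differ.

% Expected generalization error of a learning algorithm P_{W|S}:
% population risk minus empirical risk, averaged over (W, S).
\overline{\operatorname{gen}}(\mu, P_{W \mid S})
  = \mathbb{E}_{W,S}\!\left[ L_{\mu}(W) - L_{S}(W) \right]

% Renyi divergence of order alpha (0 < alpha < 1) between P and Q.
D_{\alpha}(P \,\|\, Q)
  = \frac{1}{\alpha - 1} \log \int p(x)^{\alpha}\, q(x)^{1-\alpha}\, \mathrm{d}x

% Generalized (skewed) alpha-Jensen-Shannon divergence: KL divergences
% to the alpha-mixture, which is bounded above by the binary entropy
% h_b(alpha) and hence always finite.
\mathrm{JS}_{\alpha}(P \,\|\, Q)
  = \alpha\, D_{\mathrm{KL}}\!\left(P \,\middle\|\, \alpha P + (1-\alpha) Q\right)
  + (1-\alpha)\, D_{\mathrm{KL}}\!\left(Q \,\middle\|\, \alpha P + (1-\alpha) Q\right)

Information measures of this kind are typically obtained by applying such divergences to the joint distribution P_{W,S} and the product of marginals P_W ⊗ P_S; the fact that JS_α is bounded by h_b(α) = −α log α − (1−α) log(1−α) is consistent with the abstract's claim that the α-Jensen-Shannon-based bounds are finite. How the paper's auxiliary distributions enter these definitions is detailed in the paper itself.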
