  3. 2012.13326
18
0

A Tight Lower Bound for Uniformly Stable Algorithms

24 December 2020
Qinghua Liu
Zhou Lu
Abstract

Leveraging algorithmic stability to derive sharp generalization bounds is a classic and powerful approach in learning theory. Since Vapnik and Chervonenkis [1974] first formalized the idea for analyzing SVMs, it has been used to study many fundamental learning algorithms, e.g., $k$-nearest neighbors [Rogers and Wagner, 1978], the stochastic gradient method [Hardt et al., 2016], and linear regression [Maurer, 2017]. A recent line of remarkable work by Feldman and Vondrak [2018, 2019] and Bousquet et al. [2020b] proves a high-probability generalization upper bound of order $\tilde{\mathcal{O}}(\gamma + \frac{L}{\sqrt{n}})$ for any uniformly $\gamma$-stable algorithm with an $L$-bounded loss function. Although much progress has been made on generalization upper bounds for stable algorithms, our knowledge of lower bounds is rather limited: to the best of our knowledge, no nontrivial lower bound has been known since uniform stability was introduced [Bousquet and Elisseeff, 2002]. In this paper we fill the gap by proving a tight generalization lower bound of order $\Omega(\gamma + \frac{L}{\sqrt{n}})$, which matches the best known upper bound up to logarithmic factors.
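To make the quantities in the abstract concrete: an algorithm $A$ is uniformly $\gamma$-stable [Bousquet and Elisseeff, 2002] if replacing any single example in the training set $S$ changes the loss at every test point $z$ by at most $\gamma$, and the generalization gap is the difference between population risk and empirical risk. The toy sketch below (not from the paper; the choice of loss and algorithm is an illustrative assumption) uses the sample mean with a 1-bounded, 1-Lipschitz loss, which is uniformly $1/n$-stable, and checks that the measured gap sits under the $\gamma + L/\sqrt{n}$ scale of the bounds discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(theta, z):
    # 1-bounded, 1-Lipschitz loss in theta (illustrative choice)
    return np.minimum(np.abs(theta - z), 1.0)

def algorithm(S):
    # Sample mean on [0, 1] data: replacing one point moves the output
    # by at most 1/n, so with a 1-Lipschitz loss the algorithm is
    # uniformly gamma-stable with gamma = 1/n.
    return S.mean()

n = 200
gamma, L = 1.0 / n, 1.0

S = rng.uniform(0, 1, size=n)
theta = algorithm(S)

# Empirical risk on S vs. Monte Carlo estimate of the population risk.
emp_risk = loss(theta, S).mean()
pop_risk = loss(theta, rng.uniform(0, 1, size=200_000)).mean()
gap = abs(pop_risk - emp_risk)

# Empirical check of uniform stability: swap one training point and
# measure the worst-case change in loss across all test points.
S_prime = S.copy()
S_prime[0] = rng.uniform(0, 1)
shift = np.max(np.abs(loss(algorithm(S_prime), S) - loss(theta, S)))

print(f"stability shift = {shift:.5f} (gamma = {gamma:.5f})")
print(f"generalization gap = {gap:.5f} (gamma + L/sqrt(n) = {gamma + L/np.sqrt(n):.5f})")
```

The paper's contribution is on the other side of this picture: it constructs an algorithm and loss for which the gap is provably *at least* $\Omega(\gamma + \frac{L}{\sqrt{n}})$, showing the upper bound cannot be improved beyond logarithmic factors.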
