The Rate of Convergence of AdaBoost

29 June 2011
Indraneel Mukherjee
Cynthia Rudin
Robert E. Schapire
arXiv:1106.6024
Abstract

The AdaBoost algorithm was designed to combine many "weak" hypotheses that perform slightly better than random guessing into a "strong" hypothesis that has very low error. We study the rate at which AdaBoost iteratively converges to the minimum of the "exponential loss." Unlike previous work, our proofs do not require a weak-learning assumption, nor do they require that minimizers of the exponential loss are finite. Our first result shows that at iteration $t$, the exponential loss of AdaBoost's computed parameter vector will be at most $\epsilon$ more than that of any parameter vector of $\ell_1$-norm bounded by $B$ in a number of rounds that is at most a polynomial in $B$ and $1/\epsilon$. We also provide lower bounds showing that a polynomial dependence on these parameters is necessary. Our second result is that within $C/\epsilon$ iterations, AdaBoost achieves a value of the exponential loss that is at most $\epsilon$ more than the best possible value, where $C$ depends on the dataset. We show that this dependence of the rate on $\epsilon$ is optimal up to constant factors, i.e., at least $\Omega(1/\epsilon)$ rounds are necessary to achieve within $\epsilon$ of the optimal exponential loss.
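The quantity whose convergence rate the paper analyzes is the exponential loss $\frac{1}{n}\sum_i \exp(-y_i F_t(x_i))$ of AdaBoost's combined hypothesis $F_t$ after $t$ rounds. The following is a minimal sketch, not the authors' code, assuming decision stumps as the weak learners and a small NumPy implementation, that shows how this loss can be tracked round by round:

```python
# Minimal AdaBoost sketch (assumed setup: decision-stump weak learners),
# tracking the per-round exponential loss discussed in the abstract.
import numpy as np

def adaboost_exp_loss(X, y, T=100):
    """Run AdaBoost for T rounds; return the exponential loss after each round.

    X: (n, d) feature matrix; y: labels in {-1, +1}.
    """
    n, d = X.shape
    w = np.full(n, 1.0 / n)      # distribution over training examples
    margins = np.zeros(n)        # running y_i * F_t(x_i)
    losses = []
    for _ in range(T):
        # Weak learner: exhaustively pick the threshold stump with the
        # smallest weighted error under the current distribution w.
        best = None
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    h = sign * np.where(X[:, j] <= thr, 1, -1)
                    err = np.sum(w[h != y])
                    if best is None or err < best[0]:
                        best = (err, h)
        err, h = best
        err = np.clip(err, 1e-12, 1 - 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)   # AdaBoost's step size
        margins += alpha * h * y                # accumulate y_i * F_t(x_i)
        # Exponential loss (1/n) * sum_i exp(-y_i F_t(x_i)).
        losses.append(np.mean(np.exp(-margins)))
        w = np.exp(-margins)
        w /= w.sum()                            # reweight examples
    return losses
```

Plotting the returned `losses` against the round index gives an empirical view of the $O(1/\epsilon)$-type behavior the paper proves: the loss decreases monotonically toward its infimum, even when no weak-learning assumption holds and no finite minimizer exists.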
