Understanding Aggregations of Proper Learners in Multiclass Classification

30 October 2024
Julian Asilis
Mikael Møller Høgsgaard
Grigoris Velegkas
Abstract

Multiclass learnability is known to exhibit a properness barrier: there are learnable classes which cannot be learned by any proper learner. Binary classification faces no such barrier for learnability, but a similar one for optimal learning, which can in general only be achieved by improper learners. Fortunately, recent advances in binary classification have demonstrated that this requirement can be satisfied using aggregations of proper learners, some of which are strikingly simple. This raises a natural question: to what extent can simple aggregations of proper learners overcome the properness barrier in multiclass classification? We give a positive answer to this question for classes which have finite Graph dimension, $d_G$. Namely, we demonstrate that the optimal binary learners of Hanneke, Larsen, and Aden-Ali et al. (appropriately generalized to the multiclass setting) achieve sample complexity $O\left(\frac{d_G + \ln(1/\delta)}{\epsilon}\right)$. This forms a strict improvement upon the sample complexity of ERM. We complement this with a lower bound demonstrating that for certain classes of Graph dimension $d_G$, majorities of ERM learners require $\Omega\left(\frac{d_G + \ln(1/\delta)}{\epsilon}\right)$ samples. Furthermore, we show that a single ERM requires $\Omega\left(\frac{d_G \ln(1/\epsilon) + \ln(1/\delta)}{\epsilon}\right)$ samples on such classes, exceeding the lower bound of Daniely et al. (2015) by a factor of $\ln(1/\epsilon)$. For multiclass learning in full generality -- i.e., for classes of finite DS dimension but possibly infinite Graph dimension -- we give a strong refutation to these learning strategies, by exhibiting a learnable class which cannot be learned to constant error by any aggregation of a finite number of proper learners.
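To make the notion of "aggregations of proper learners" concrete, here is a minimal sketch of one such strategy: a plurality vote over ERM learners, each trained on a random subsample. This is only an illustration of the general idea, not the paper's construction (the optimal learners of Hanneke, Larsen, and Aden-Ali et al. use more structured subsampling schemes); the hypothesis class, subsampling, and helper names below are hypothetical.

```python
# Illustrative sketch: plurality vote over ERM learners trained on subsamples.
# Not the paper's algorithm; the toy hypothesis class and bootstrap-style
# subsampling here are placeholders for exposition only.
from collections import Counter
import random

def erm(hypotheses, sample):
    """Proper learner: return a hypothesis minimizing empirical error on the sample."""
    return min(hypotheses, key=lambda h: sum(h(x) != y for x, y in sample))

def majority_of_erms(hypotheses, sample, num_learners=11, seed=0):
    """Improper aggregation: train ERM on random subsamples, predict by plurality vote."""
    rng = random.Random(seed)
    learners = []
    for _ in range(num_learners):
        sub = [rng.choice(sample) for _ in range(len(sample))]  # bootstrap-style subsample
        learners.append(erm(hypotheses, sub))
    def aggregate(x):
        votes = Counter(h(x) for h in learners)
        return votes.most_common(1)[0][0]
    return aggregate

if __name__ == "__main__":
    # Toy multiclass class: two-threshold classifiers on {0, ..., 9} with labels {0, 1, 2}.
    hypotheses = [lambda x, a=a, b=b: 0 if x < a else (1 if x < b else 2)
                  for a in range(10) for b in range(a, 10)]
    target = hypotheses[17]
    sample = [(x, target(x)) for x in range(10) for _ in range(3)]
    predictor = majority_of_erms(hypotheses, sample)
    print(sum(predictor(x) == target(x) for x in range(10)), "of 10 points correct")
```

The vote over several proper (ERM) hypotheses can itself be improper, which is what lets such aggregations circumvent limitations of any single proper learner.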
