Classification Under Ambiguity: When Is Average-K Better Than Top-K?

16 December 2021
Titouan Lorieul
Alexis Joly
Dennis Shasha
Abstract

When many labels are possible, choosing a single one can lead to low precision. A common alternative, referred to as top-K classification, is to choose some number K (commonly around 5) and to return the K labels with the highest scores. Unfortunately, for unambiguous cases, K > 1 is too many and, for very ambiguous cases, K ≤ 5 (for example) can be too small. A sensible alternative strategy is to use an adaptive approach in which the number of labels returned varies as a function of the computed ambiguity, but must average to some particular K over all the samples. We denote this alternative average-K classification. This paper formally characterizes the ambiguity profile when average-K classification can achieve a lower error rate than a fixed top-K classification. Moreover, it provides natural estimation procedures for both the fixed-size and the adaptive classifier and proves their consistency. Finally, it reports experiments on real-world image data sets revealing the benefit of average-K classification over top-K in practice. Overall, when the ambiguity is known precisely, average-K is never worse than top-K, and, in our experiments, when it is estimated, this also holds.
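
As a concrete illustration (not taken from the paper), the sketch below contrasts the two decision rules on precomputed class scores: top-K returns a fixed-size label set per sample, while average-K returns every label above a single global score threshold chosen so that the mean set size equals K. The function names and the use of softmax scores are assumptions made here for illustration.

```python
import numpy as np

def top_k_sets(scores, k):
    """Fixed-size rule: return the k highest-scoring labels for each sample.

    scores: (n_samples, n_classes) array of class scores (e.g. softmax outputs).
    """
    return [set(np.argsort(row)[-k:]) for row in scores]

def average_k_sets(scores, k):
    """Adaptive rule: return every label whose score exceeds one global
    threshold, chosen so that the *average* set size equals k.

    With n samples, keeping the n*k largest scores over all (sample, label)
    pairs yields a mean set size of k (exactly, barring ties).
    """
    n = scores.shape[0]
    flat = np.sort(scores, axis=None)   # all scores, flattened and ascending
    threshold = flat[-int(n * k)]       # the (n*k)-th largest score overall
    return [set(np.flatnonzero(row >= threshold)) for row in scores]

# Toy usage: random softmax scores over 100 classes.
rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 100))
scores = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

sets = average_k_sets(scores, k=5)
print(np.mean([len(s) for s in sets]))  # ≈ 5 on average; per-sample size varies
```

Note that this sketch sets the threshold on the same scores it is applied to, purely to make the average-size constraint visible; in practice the threshold would be estimated on held-out data and applied to new samples, which is the estimation question the paper's consistency results address.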
