On the Consistency of Top-k Surrogate Losses

30 January 2019
Forest Yang, Oluwasanmi Koyejo
arXiv:1901.11141
Abstract

The top-k error is often employed to evaluate performance on challenging classification tasks in computer vision, as it is designed to compensate for ambiguity in ground-truth labels. This practical success motivates our theoretical analysis of consistent top-k classification. Surprisingly, it is not rigorously understood when taking the k-argmax of one vector is guaranteed to return the k-argmax of another, even though doing so is crucial for describing Bayes optimality; we address both tasks. We then define top-k calibration and show that it is necessary and sufficient for consistency. Based on the top-k calibration analysis, we propose a class of top-k calibrated Bregman divergence surrogates. Our analysis continues by showing that previously proposed hinge-like top-k surrogate losses are not top-k calibrated, and it suggests that no convex hinge loss is top-k calibrated. On the other hand, we propose a new hinge loss which is consistent. Exploring further, we show that our hinge loss remains consistent under a restriction to linear functions, while cross entropy does not. Finally, we exhibit a differentiable, convex loss function which is top-k calibrated for specific values of k.
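The evaluation metric at the heart of the abstract, the top-k error, counts an example as correct when its true label appears among the k highest-scoring classes. The following minimal numpy sketch (our illustration, not code from the paper; the function name and tie-breaking behavior are our choices) computes it, and also surfaces the subtlety the abstract alludes to: under tied scores the k-argmax set is not unique, so argsort breaks ties arbitrarily.

```python
import numpy as np

def top_k_error(scores, labels, k):
    """Empirical top-k error: the fraction of examples whose true label
    is NOT among the k highest-scoring classes.

    scores: (n, m) array of per-class scores; labels: (n,) integer labels.
    With tied scores the k-argmax is not unique; argsort breaks ties
    arbitrarily here, one of the subtleties the paper's k-argmax
    analysis is careful about.
    """
    top_k = np.argsort(scores, axis=1)[:, -k:]      # k largest per row
    hit = (top_k == labels[:, None]).any(axis=1)    # true label in top k?
    return 1.0 - hit.mean()

# Hypothetical scores for 3 examples over 4 classes.
scores = np.array([[0.10, 0.50, 0.20, 0.20],
                   [0.40, 0.10, 0.30, 0.20],
                   [0.25, 0.25, 0.25, 0.25]])
labels = np.array([1, 2, 3])
print(top_k_error(scores, labels, k=1))  # 0.333...: the second example's label (class 2) is not the argmax
print(top_k_error(scores, labels, k=2))  # 0.0: every true label lands in the (tie-broken) top 2
```

Note how the last example, with all scores tied, is "correct" only by the accident of how ties are broken; a different tie-breaking rule would give a different k-argmax set, which is precisely why characterizing when one vector's k-argmax recovers another's requires care.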
