ResearchTrend.AI
arXiv:2307.02035
Ranking with Abstention

5 July 2023
Anqi Mao
M. Mohri
Yutao Zhong
Abstract

We introduce a novel framework of ranking with abstention, where the learner can abstain from making a prediction at some limited cost c. We present an extensive theoretical analysis of this framework, including a series of H-consistency bounds for both the family of linear functions and that of one-hidden-layer neural networks. These theoretical guarantees are the state-of-the-art consistency guarantees in the literature: upper bounds on the target loss estimation error of a predictor in a hypothesis set H, expressed in terms of the surrogate loss estimation error of that predictor. We further argue that our proposed abstention methods are important when using common equicontinuous hypothesis sets in practice. We report the results of experiments illustrating the effectiveness of ranking with abstention.
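To make the abstention idea concrete, here is a minimal hypothetical sketch (not the paper's exact formulation): for a pair of items, a scorer produces a margin h(x) - h(x'); when the margin magnitude falls below a threshold gamma, the learner abstains and pays the fixed cost c, and otherwise it incurs the usual zero-one pairwise ranking loss. The names `abstention_ranking_loss`, `gamma`, and the particular thresholding rule are illustrative assumptions, not the authors' definitions.

```python
def abstention_ranking_loss(score_diff, label, c=0.1, gamma=0.5):
    """Illustrative pairwise ranking loss with abstention.

    score_diff: h(x) - h(x') for the pair of items.
    label: +1 if x should rank above x', -1 otherwise.
    c: abstention cost (assumed 0 < c < 1/2).
    gamma: margin threshold below which the learner abstains.
    """
    if abs(score_diff) < gamma:
        # Low-confidence pair: abstain and pay the fixed cost c.
        return c
    # Confident pair: standard zero-one ranking loss.
    return 0.0 if label * score_diff > 0 else 1.0

# Example pairs: small margin abstains, large correct/incorrect
# margins incur 0 or 1 respectively.
print(abstention_ranking_loss(0.2, +1))   # abstains -> 0.1
print(abstention_ranking_loss(1.3, +1))   # correct  -> 0.0
print(abstention_ranking_loss(-0.9, +1))  # wrong    -> 1.0
```

The abstention cost c trades off coverage against accuracy: a smaller c encourages abstaining on more borderline pairs.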
