Aggregating Binary Classifiers Optimally with General Losses

1 October 2015
Akshay Balsubramani, Yoav Freund
arXiv:1510.00452
Abstract

We develop a worst-case analysis of aggregation of ensembles of binary classifiers in a semi-supervised setting, for a broad class of losses including but not limited to all convex surrogates. The result is a family of parameter-free ensemble aggregation algorithms which use labeled and unlabeled data; these are as efficient as linear learning and prediction for convex risk minimization but work without any relaxations on many nonconvex losses like the 0-1 loss. The prediction algorithms take a familiar form, applying "inverse link functions" to a generalized notion of ensemble margin, but without the assumptions typically made in margin-based learning. All this structure follows from interpreting loss minimization as a game played over unlabeled data.
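To make the prediction structure concrete, here is a minimal sketch of the game for the special case of the 0-1 loss, where the potential reduces to max(|m|, 1) and the "inverse link" is clipping the ensemble margin to [-1, 1]. The function name `aggregate_01`, the projected subgradient solver, and the step-size schedule are illustrative assumptions of this sketch, not the paper's exact algorithm; the paper handles general losses with loss-specific potentials and link functions.

```python
import numpy as np

def aggregate_01(F, b, lr=0.1, n_iters=2000):
    """Sketch of minimax ensemble aggregation for the 0-1 loss.

    F : (p, n) array of ensemble predictions in {-1, +1} on unlabeled data
    b : (p,) lower bounds on each classifier's correlation with the true
        labels, estimated from labeled data (an assumption of this sketch)

    Minimizes the convex slack function
        gamma(sigma) = -b . sigma + (1/n) * sum_j max(|F[:, j] . sigma|, 1)
    over sigma >= 0 by projected subgradient descent, then predicts by
    applying the 0-1 loss "inverse link", i.e. clipping the ensemble
    margin to [-1, 1].
    """
    p, n = F.shape
    sigma = np.zeros(p)
    for t in range(1, n_iters + 1):
        margins = sigma @ F                  # ensemble margin at each unlabeled point
        # Subgradient of max(|m|, 1): sign(m) * F[:, j] where |m| > 1, else 0.
        active = np.abs(margins) > 1.0
        grad = -b + (F[:, active] @ np.sign(margins[active])) / n
        # Diminishing step size, then project onto the nonnegative orthant.
        sigma = np.maximum(sigma - (lr / np.sqrt(t)) * grad, 0.0)
    g = np.clip(sigma @ F, -1.0, 1.0)        # clipped margin = (randomized) prediction
    return sigma, g
```

Since the objective is a pointwise maximum of linear functions of sigma, it is convex, and the whole procedure costs one p-by-n matrix-vector product per iteration, which is the sense in which aggregation is as efficient as linear learning and prediction.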
