On-line Learning with Abstention

Abstract

We introduce and analyze an on-line learning setting where the learner has the added option of abstaining from making a prediction, at the price of a fixed cost. When the learner abstains, no feedback is provided, and she does not receive the label associated with the example. We design several algorithms and derive regret guarantees in both the adversarial and stochastic loss settings. In the process, we derive a new bound for on-line learning with feedback graphs that generalizes and extends existing work. We also design a new algorithm for on-line learning with sleeping experts that takes advantage of time-varying feedback graphs. We present natural extensions of existing algorithms as a baseline, and we then design more sophisticated algorithms that explicitly exploit the structure of our problem. We empirically validate the improvement of these more sophisticated algorithms on several datasets.
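To make the interaction protocol concrete, here is a minimal sketch of the setting described above: in each round the learner either predicts (incurring 0/1 loss and observing the label) or abstains (incurring a fixed cost and observing nothing). The threshold learner, the abstention band, the noise rate, and the cost value are all illustrative assumptions, not the paper's algorithms.

```python
import random

ABSTAIN_COST = 0.3  # hypothetical fixed abstention cost (illustrative)

def run(rounds=1000, seed=0):
    """Simulate the abstention protocol with a toy confidence-threshold learner.

    Each round, a feature x in [0, 1] arrives whose label is 1[x > 0.5],
    flipped with 10% noise. The learner abstains when x is near the decision
    boundary; on abstention it pays ABSTAIN_COST and receives NO label
    feedback, as in the setting above. Returns the average per-round loss.
    """
    rng = random.Random(seed)
    total_loss = 0.0
    for _ in range(rounds):
        x = rng.random()
        label = int(x > 0.5)
        if rng.random() < 0.1:              # 10% label noise
            label = 1 - label
        if abs(x - 0.5) < 0.1:
            total_loss += ABSTAIN_COST      # fixed cost, label never observed
        else:
            pred = int(x > 0.5)
            total_loss += int(pred != label)  # 0/1 loss; label observed
    return total_loss / rounds

print(run())
```

Abstaining is worthwhile here because the fixed cost (0.3) is below the ~0.5 expected 0/1 loss near the boundary; the paper's algorithms learn such trade-offs with regret guarantees rather than a hand-set threshold.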
