We consider the problem of learning reject option classifiers. The performance of a reject option classifier is quantified using a loss function that assigns a cost to rejection. In this paper, we propose the {\em double ramp loss}, which gives a continuous upper bound on this loss. Our approach minimizes the regularized risk under the double ramp loss using {\em difference of convex (DC) programming}. Experiments on synthetic and benchmark datasets show that our approach outperforms state-of-the-art reject option classification approaches.
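To make the construction concrete, below is a hedged sketch of a double-ramp-shaped surrogate written explicitly as a difference of two convex ramp (hinge-like) pieces, which is the structure DC programming exploits. The parameter names \texttt{d} (rejection cost), \texttt{rho}, and \texttt{mu} (band and slope widths) are illustrative assumptions for this sketch, not necessarily the paper's exact definition.

```python
import numpy as np

def ramp(z):
    # Convex ramp (hinge-style piece): max(0, z)
    return np.maximum(0.0, z)

def double_ramp_loss(margin, d=0.3, rho=1.0, mu=1.0):
    """Illustrative double-ramp-shaped surrogate (assumed form, not
    necessarily the paper's exact loss): a difference of two convex
    piecewise-linear functions, so the loss is continuous and bounded
    and difference-of-convex (DC) programming applies.

    margin = y * f(x); d is the rejection cost; rho and mu are
    hypothetical band/slope parameters for this sketch.
    """
    # Convex part minus convex part -> DC structure
    convex = d / mu * ramp(mu + rho - margin) \
        + (1 - d) / mu * ramp(mu - rho - margin)
    concave = d / mu * ramp(rho - margin) \
        + (1 - d) / mu * ramp(-rho - margin)
    return convex - concave
```

With the defaults above, the surrogate equals 0 for confidently correct predictions (large positive margin), the rejection cost \texttt{d} at margin 0, and 1 for confidently wrong predictions, interpolating linearly in between.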