Positive-Unlabeled Learning with Non-Negative Risk Estimator

From only positive (P) and unlabeled (U) data, a binary classifier can be trained via PU learning. Unbiased PU learning, which is based on unbiased risk estimators, is currently the state of the art. However, if the model is very flexible, the empirical risk on the training data can go negative, leading to severe overfitting. In this paper, we propose a novel non-negative risk estimator for PU learning. When minimized, it is more robust against overfitting, and thus we are able to train very flexible models given limited P data. Moreover, we analyze the bias, consistency, and mean-squared-error reduction of the proposed risk estimator, as well as the estimation error of the corresponding risk minimizer. Experiments show that the non-negative risk estimator outperforms its unbiased counterparts when they disagree.
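For illustration, here is a minimal NumPy sketch of the clipped (non-negative) risk described above. The function names, the sigmoid surrogate loss, and the known class prior `prior` are assumptions made for this example, not the paper's notation; the paper also describes a gradient-based correction for training deep models when the clipped term becomes active.

```python
import numpy as np

def sigmoid_loss(z, y):
    """Sigmoid surrogate loss l(z, y) = 1 / (1 + exp(y * z)); any surrogate loss works."""
    return 1.0 / (1.0 + np.exp(y * z))

def nonnegative_pu_risk(g_p, g_u, prior, loss=sigmoid_loss):
    """Non-negative PU risk estimate (sketch).

    g_p   : classifier outputs g(x) on positive (P) samples
    g_u   : classifier outputs g(x) on unlabeled (U) samples
    prior : class prior pi_p = P(Y = +1), assumed known or pre-estimated
    """
    risk_p_pos = loss(g_p, +1).mean()   # positives treated as label +1
    risk_p_neg = loss(g_p, -1).mean()   # positives treated as label -1
    risk_u_neg = loss(g_u, -1).mean()   # unlabeled treated as label -1
    # Unbiased estimator: pi * R_p^+ + (R_u^- - pi * R_p^-); the second term
    # can go negative for flexible models, so it is clipped at zero here.
    return prior * risk_p_pos + max(0.0, risk_u_neg - prior * risk_p_neg)
```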