We present $\alpha$-loss, $\alpha \in [1, \infty]$, a tunable loss function for binary classification that bridges log-loss ($\alpha = 1$) and $0$-$1$ loss ($\alpha = \infty$). We prove that $\alpha$-loss has an equivalent margin-based form and is classification-calibrated, two desirable properties for a good surrogate loss function for the ideal yet intractable $0$-$1$ loss. For logistic regression-based classification, we provide an upper bound on the difference between the empirical and expected risk at the empirical risk minimizers of $\alpha$-loss by exploiting its Lipschitzianity together with recent results on the landscape features of empirical risk functions. Finally, we show that $\alpha$-loss with $\alpha = 2$ performs better than log-loss on MNIST for logistic regression.
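For concreteness, below is a minimal NumPy sketch of the loss. The closed form $\ell^{\alpha}(p) = \frac{\alpha}{\alpha - 1}\left(1 - p^{1 - 1/\alpha}\right)$, where $p$ is the probability the model assigns to the true label, follows the paper's definition but is not stated in this abstract, so it should be read as an assumption; the function names are illustrative only.

```python
import numpy as np

def alpha_loss(p, alpha):
    """alpha-loss of the probability p assigned to the true label.

    Assumed closed form (from the paper, not this abstract):
        l_alpha(p) = (alpha / (alpha - 1)) * (1 - p ** (1 - 1/alpha)),
    recovering log-loss (-log p) as alpha -> 1 and the soft 0-1 loss
    (1 - p) as alpha -> infinity.
    """
    p = np.asarray(p, dtype=float)
    if np.isclose(alpha, 1.0):
        return -np.log(p)               # log-loss endpoint (alpha = 1)
    if np.isinf(alpha):
        return 1.0 - p                  # soft 0-1 loss endpoint (alpha = inf)
    return (alpha / (alpha - 1.0)) * (1.0 - p ** (1.0 - 1.0 / alpha))

def margin_alpha_loss(z, alpha):
    """Margin-based form: apply alpha-loss to sigmoid(z), z = y * f(x)."""
    return alpha_loss(1.0 / (1.0 + np.exp(-z)), alpha)

# Sanity check: alpha-loss interpolates between its two endpoints.
p = np.array([0.9, 0.6, 0.3])
print(alpha_loss(p, 1.0))     # matches -log(p)
print(alpha_loss(p, 2.0))     # alpha = 2, the setting used in the MNIST experiment
print(alpha_loss(p, np.inf))  # matches 1 - p
```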