Finite sample rates for logistic regression with small noise or few samples

The logistic regression estimator is known to inflate the magnitude of its coefficients if the sample size is small, the dimension is (moderately) large, or the signal-to-noise ratio is large (probabilities of observing a label are close to 0 or 1). With this in mind, we study the logistic regression estimator with , assuming Gaussian covariates and labels generated by the Gaussian link function, with a mild optimization constraint on the estimator's length to ensure existence. We provide finite sample guarantees for its direction, which serves as a classifier, and its Euclidean norm, which is an estimator of the signal-to-noise ratio. We distinguish between two regimes. In the low-noise/small-sample regime (), we show that the estimator's direction (and consequently the classification error) achieves the rate, up to the log term, as if the problem were noiseless. In this case, the norm of the estimator is at least of order . If instead , the estimator's direction achieves the rate , whereas its norm converges to the true norm at the rate . As a corollary, the data are not linearly separable with high probability in this regime. In either regime, logistic regression provides a competitive classifier.
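The setup described above can be illustrated with a small simulation. The sketch below is not the paper's method, only a minimal reproduction of the model under stated assumptions: covariates are standard Gaussian, labels follow the Gaussian (probit) link P(y = +1 | x) = Phi(<theta*, x>), and logistic regression is fit by projected gradient descent with a norm constraint ||theta|| <= R standing in for the abstract's "mild optimization constraint". All concrete choices (d, n, R, step size, iteration count) are illustrative, not from the paper.

```python
import math
import random

def phi(t):
    """Standard normal CDF (the Gaussian link)."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def simulate(n, d, theta_star, rng):
    """Gaussian covariates; labels in {-1,+1} via the probit link."""
    X, y = [], []
    for _ in range(n):
        x = [rng.gauss(0.0, 1.0) for _ in range(d)]
        label = 1 if rng.random() < phi(dot(theta_star, x)) else -1
        X.append(x)
        y.append(label)
    return X, y

def fit_logistic(X, y, R, steps=500, lr=0.5):
    """Minimize the logistic loss by gradient descent, projecting
    theta back onto the ball of radius R (norm constraint)."""
    n, d = len(X), len(X[0])
    theta = [0.0] * d
    for _ in range(steps):
        grad = [0.0] * d
        for x, yi in zip(X, y):
            m = yi * dot(theta, x)
            s = -yi / (1.0 + math.exp(m))  # derivative of log(1+e^{-m})
            for j in range(d):
                grad[j] += s * x[j] / n
        theta = [t - lr * g for t, g in zip(theta, grad)]
        norm = math.sqrt(dot(theta, theta))
        if norm > R:  # projection step enforcing ||theta|| <= R
            theta = [t * R / norm for t in theta]
    return theta

rng = random.Random(0)
d, n, R = 5, 400, 10.0
theta_star = [1.0] + [0.0] * (d - 1)  # illustrative unit-norm signal
X, y = simulate(n, d, theta_star, rng)
theta_hat = fit_logistic(X, y, R)

# The direction estimates theta*/||theta*|| (the classifier);
# the norm estimates the signal-to-noise ratio ||theta*||.
norm_hat = math.sqrt(dot(theta_hat, theta_hat))
direction = [t / norm_hat for t in theta_hat]
err = math.sqrt(sum((a - b) ** 2 for a, b in zip(direction, theta_star)))
print("estimated norm:", round(norm_hat, 3))
print("direction error:", round(err, 3))
```

With a well-conditioned sample (n much larger than d, moderate signal strength) the direction error should be small, while in a low-noise or small-sample configuration the fitted norm tends to drift toward the constraint radius R, mirroring the coefficient inflation the abstract describes.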