
Towards a statistical theory of data selection under weak supervision

Abstract

Given a sample of size $N$, it is often useful to select a subsample of smaller size $n < N$ to be used for statistical estimation or learning. Such a data selection step is useful to reduce the requirements of data labeling and the computational complexity of learning. We assume we are given $N$ unlabeled samples $\{\boldsymbol{x}_i\}_{i\le N}$, together with access to a `surrogate model' that can predict labels $y_i$ better than random guessing. Our goal is to select a subset of the samples, denoted by $\{\boldsymbol{x}_i\}_{i\in G}$, of size $|G| = n < N$. We then acquire labels for this set and use them to train a model via regularized empirical risk minimization. Using a mixture of numerical experiments on real and synthetic data, and mathematical derivations under low- and high-dimensional asymptotics, we show that: $(i)$ data selection can be very effective, in particular beating training on the full sample in some cases; $(ii)$ certain popular choices in data selection methods (e.g. unbiased reweighted subsampling, or influence function-based subsampling) can be substantially suboptimal.
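To make the pipeline described in the abstract concrete, here is a minimal sketch, not the authors' method: a surrogate model scores all $N$ unlabeled points, the $n$ points it is least confident about are selected (one common, not necessarily optimal, rule), labels are acquired only for that subset, and a ridge-regularized least-squares fit (one instance of regularized ERM) is trained on it. The data-generating model, the margin-based scoring rule, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: N points in d dimensions with a hidden linear labeling rule.
N, n, d, lam = 5000, 500, 20, 1.0
theta_true = rng.normal(size=d)
X = rng.normal(size=(N, d))
y_all = np.sign(X @ theta_true + 0.5 * rng.normal(size=N))  # labels exist but start unobserved

# Surrogate model: a noisy copy of the true parameters, i.e. better than random guessing.
theta_surrogate = theta_true + rng.normal(size=d)
margin = np.abs(X @ theta_surrogate)            # surrogate's confidence on each point

# Data selection: keep the n points where the surrogate is least confident.
G = np.argsort(margin)[:n]
X_sel, y_sel = X[G], y_all[G]                   # labels acquired only for the selected subset

# Regularized ERM on the selected subset: ridge regression on +/-1 labels.
theta_hat = np.linalg.solve(X_sel.T @ X_sel + lam * np.eye(d), X_sel.T @ y_sel)

# Baseline: a uniformly random subset of the same size n.
G_rand = rng.choice(N, size=n, replace=False)
theta_rand = np.linalg.solve(
    X[G_rand].T @ X[G_rand] + lam * np.eye(d), X[G_rand].T @ y_all[G_rand]
)

# Compare test accuracy of the two estimators.
X_test = rng.normal(size=(10000, d))
y_test = np.sign(X_test @ theta_true)
acc = lambda th: np.mean(np.sign(X_test @ th) == y_test)
print(f"surrogate-selected subset: {acc(theta_hat):.3f}, random subset: {acc(theta_rand):.3f}")
```

Which selection rule wins depends on the setting; the paper's point is precisely that intuitive choices such as unbiased reweighted or influence-function-based subsampling can be substantially suboptimal.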
