
Classification with High-Dimensional Sparse Samples

Abstract

The task in binary classification is to determine which of two distributions generated a length-$n$ test sequence. The two distributions are unknown, but two training sequences of length $N$, one from each distribution, are observed. The distributions share an alphabet of size $m$, which is significantly larger than $n$ and $N$. How do $N$, $n$, and $m$ affect the probability of classification error? We characterize the achievable error rate in a high-dimensional setting in which $N$, $n$, and $m$ all tend to infinity with $\max\{n,N\}=o(m)$. The results are:

* An asymptotically consistent classifier exists if and only if $m=o(\min\{N^2,Nn\})$.
* The best achievable probability of classification error decays as $-\log(P_e)=J\min\{N^2,Nn\}(1+o(1))/m$ with $J>0$ (established via matching achievability and converse results).
* A weighted coincidence-based classifier achieves a non-zero generalized error exponent $J$.
* The $\ell_2$-norm based classifier has a zero generalized error exponent.
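To make the coincidence idea concrete, the following is a minimal sketch of an *unweighted* coincidence-based decision rule: classify the test sequence to whichever training sequence it shares more symbol coincidences with. This is a hypothetical simplification for illustration only; the classifier analyzed in the paper applies carefully chosen weights to the coincidence statistics, and the function names here are our own.

```python
import numpy as np

def coincidences(x, y, m):
    """Count coinciding symbol pairs between sequences x and y
    over an alphabet {0, ..., m-1}."""
    cx = np.bincount(x, minlength=m)  # empirical counts of x
    cy = np.bincount(y, minlength=m)  # empirical counts of y
    return int(np.dot(cx, cy))       # sum over symbols of cx[a]*cy[a]

def classify(train1, train2, test, m):
    """Hypothetical unweighted coincidence rule: declare the test
    sequence drawn from the distribution whose training sequence
    it coincides with more often (ties broken toward 1)."""
    s1 = coincidences(test, train1, m)
    s2 = coincidences(test, train2, m)
    return 1 if s1 >= s2 else 2
```

In the sparse regime studied here ($\max\{n,N\}=o(m)$), most symbols appear at most once, so these pairwise coincidence counts carry essentially all the usable statistical information; the weighting in the paper's classifier is what yields the non-zero generalized error exponent.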
