Exact high-dimensional asymptotics for Support Vector Machine

The support vector machine (SVM) is one of the most widely used classification methods. In this paper, we consider the soft-margin SVM applied to data points with independent features, where the sample size $n$ and the feature dimension $p$ grow to infinity in a fixed ratio $p/n \to \delta$. We propose a set of equations that exactly characterizes the asymptotic behavior of the SVM. In particular, we give exact formulas for (1) the variability of the optimal coefficients, (2) the proportion of data points lying on the margin boundary (i.e., the number of support vectors), (3) the final objective function value, and (4) the expected misclassification error on new data points, which in particular yields the exact formula for the optimal tuning parameter under a given data-generating mechanism. We first establish these formulas in the case where the label $y$ is independent of the feature $x$. The results are then generalized to the case where the label $y$ is allowed to depend on the feature $x$ through a linear combination $x^\top \beta^*$. These formulas for the non-smooth hinge loss are analogous to recent results for the smooth logistic loss \citep{sur2018modern}. Our approach is based on heuristic leave-one-out calculations.
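For concreteness, a minimal sketch of the soft-margin SVM estimator, assuming the standard penalized hinge-loss parametrization with a tuning parameter $\lambda > 0$ (notation ours; the paper's exact scaling may differ):
\[
\hat\beta = \operatorname*{arg\,min}_{\beta \in \mathbb{R}^p} \; \frac{1}{n} \sum_{i=1}^{n} \max\bigl(0,\, 1 - y_i x_i^\top \beta\bigr) + \frac{\lambda}{2} \|\beta\|_2^2 .
\]
Under this parametrization, the support vectors are the points with $y_i x_i^\top \hat\beta \le 1$, i.e., those lying on or inside the margin boundary; their asymptotic proportion is one of the quantities characterized above.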