Random Projections for Support Vector Machines

Abstract
Let $\mathbf{X} \in \mathbb{R}^{n \times d}$ be a data matrix of rank $\rho$, representing $n$ points in $\mathbb{R}^d$. The linear support vector machine constructs a hyperplane separator that maximizes the 1-norm soft margin. We develop a new oblivious dimension reduction technique which is precomputed and can be applied to any input matrix $\mathbf{X}$. We prove that, with high probability, the margin and minimum enclosing ball in the feature space are preserved to within $\epsilon$-relative error, ensuring generalization comparable to that in the original space. We present extensive experiments with real and synthetic data to support our theory.
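As a concrete illustration of the setting above, here is a minimal sketch in Python (using NumPy and scikit-learn, which are not prescribed by the paper) of training a linear SVM on randomly projected data. The dense Gaussian projection used here is one standard oblivious construction chosen for simplicity; the paper's specific projection matrices may differ, and all sizes and names below are hypothetical.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Illustrative sizes: n points in d dimensions, projected down to r.
n, d, r = 500, 2000, 200

# Synthetic linearly separable data (for demonstration only).
X = rng.standard_normal((n, d))
y = np.sign(X @ rng.standard_normal(d))

# Oblivious projection: R is drawn without looking at X, so it can be
# precomputed once and applied to any input matrix.
R = rng.standard_normal((d, r)) / np.sqrt(r)
X_r = X @ R

# Linear SVM with the 1-norm (hinge-loss) soft margin, trained in the
# projected space.
clf = LinearSVC(C=1.0, loss="hinge", dual=True, max_iter=10_000)
clf.fit(X_r, y)
print("training accuracy after projection:", clf.score(X_r, y))
```

Because the projection is oblivious, the same $R$ can be reused across datasets of the same dimensionality; the paper's guarantee is that the margin obtained in the projected space is, with high probability, within $\epsilon$-relative error of the margin in the original space.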