
Random Projections for Support Vector Machines

Abstract

Let $\mathbf{X} \in \mathbb{R}^{n \times d}$ be a data matrix of rank $\rho$, representing $n$ points in $\mathbb{R}^d$. The linear support vector machine constructs a hyperplane separator that maximizes the 1-norm soft margin. We develop a new oblivious dimension reduction technique which is precomputed and can be applied to any input matrix $\mathbf{X}$. We prove that, with high probability, the margin and minimum enclosing ball in the feature space are preserved to within $\epsilon$-relative error, ensuring generalization comparable to that in the original space. We present extensive experiments with real and synthetic data to support our theory.
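The sketch below illustrates the general recipe the abstract describes: draw an oblivious random projection (independently of the data, so it can be precomputed), apply it to the data matrix $\mathbf{X}$, and train a linear SVM in the reduced space. The dense Gaussian projection, the dimensions, and the scikit-learn solver here are illustrative stand-ins, not the paper's specific construction.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n, d, k = 500, 2000, 100  # n points in d dimensions, projected down to k

# Synthetic linearly separable data in R^d (for illustration only).
w_true = rng.standard_normal(d)
X = rng.standard_normal((n, d))
y = np.sign(X @ w_true)

# Oblivious projection: R is drawn without looking at X, so it can be
# precomputed and applied to any input matrix.
R = rng.standard_normal((d, k)) / np.sqrt(k)
X_proj = X @ R  # n x k sketch of the data

# Train a 1-norm (hinge-loss) soft-margin linear SVM in the reduced space.
clf = LinearSVC(C=1.0, loss="hinge", dual=True)
clf.fit(X_proj, y)
print("training accuracy in projected space:", clf.score(X_proj, y))
```

If the margin is preserved up to $\epsilon$-relative error, the separator found in the $k$-dimensional sketch achieves a soft margin close to the one attainable in the original $d$-dimensional space, at a fraction of the training cost.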
