Optimal Embedding Dimension for Sparse Subspace Embeddings

A random $m \times n$ matrix $S$ is an oblivious subspace embedding (OSE) with parameters $\varepsilon > 0$, $\delta \in (0, 1/3)$ and $d \le m \le n$, if for any $d$-dimensional subspace $W \subseteq \mathbb{R}^n$,
$$\mathbb{P}\big(\forall x \in W:\ (1-\varepsilon)\|x\| \le \|Sx\| \le (1+\varepsilon)\|x\|\big) \ge 1 - \delta.$$
It is known that the embedding dimension of an OSE must satisfy $m \ge d$, and for any $\theta > 0$, a Gaussian embedding matrix with $m \ge (1+\theta)d$ is an OSE with $\varepsilon = O_\theta(1)$. However, such optimal embedding dimension is not known for other embeddings. Of particular interest are sparse OSEs, having $s \ll m$ non-zeros per column, with applications to problems such as least squares regression and low-rank approximation.

We show that, given any $\theta > 0$, an $m \times n$ random matrix $S$ with $m \ge (1+\theta)d$, consisting of randomly sparsified $\pm 1/\sqrt{s}$ entries and having $s = O(\log^4(d))$ non-zeros per column, is an oblivious subspace embedding with $\varepsilon = O_\theta(1)$. Our result addresses the main open question posed by Nelson and Nguyen (FOCS 2013), who conjectured that sparse OSEs can achieve $m = O(d)$ embedding dimension, and it improves on $m = O(d\log(d))$ shown by Cohen (SODA 2016). We use this to construct the first oblivious subspace embedding with $O(d)$ embedding dimension that can be applied faster than current matrix multiplication time, and to obtain an optimal single-pass algorithm for least squares regression. We further extend our results to Leverage Score Sparsification (LESS), a recently introduced non-oblivious embedding technique. We use LESS to construct the first subspace embedding with low distortion $\varepsilon = o(1)$ and optimal embedding dimension $m = O(d/\varepsilon^2)$ that can be applied in current matrix multiplication time.
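As a minimal illustration of the kind of sparse embedding discussed above, the sketch below builds an $m \times n$ matrix with exactly $s$ non-zero entries of value $\pm 1/\sqrt{s}$ per column and empirically checks the distortion on a random $d$-dimensional subspace via the singular values of $SU$. This is a toy, dense, loop-based construction with illustrative parameter choices, not the paper's algorithm or analysis.

```python
import numpy as np

def sparse_sign_embedding(m, n, s, rng):
    """Build an m x n sparse sign embedding: each column has exactly s
    non-zeros, each equal to +/- 1/sqrt(s), placed in s uniformly random
    distinct rows. (Toy dense construction for illustration only.)"""
    S = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=s, replace=False)
        signs = rng.choice([-1.0, 1.0], size=s)
        S[rows, j] = signs / np.sqrt(s)
    return S

rng = np.random.default_rng(0)
n, d = 2000, 20
theta = 1.0                # illustrative oversampling parameter
m = int((1 + theta) * d)   # embedding dimension m = (1 + theta) d
s = 8                      # non-zeros per column (polylog(d) in the theorem)

# A random d-dimensional subspace of R^n, represented by an orthonormal basis U.
U, _ = np.linalg.qr(rng.standard_normal((n, d)))

S = sparse_sign_embedding(m, n, s, rng)
# Distortion over the subspace = deviation of the singular values of SU from 1:
# every x in the subspace is x = Uc, and ||SUc|| / ||Uc|| lies between the
# extreme singular values of SU.
sv = np.linalg.svd(S @ U, compute_uv=False)
print(sv.min(), sv.max())  # both should be within a constant factor of 1
```

Since each column of $S$ has unit norm, $S$ preserves the norm of each coordinate vector exactly; the interesting content of the theorem is that all vectors of the subspace are preserved simultaneously, even with $m$ only a constant factor larger than $d$.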