For a tall $n \times d$ matrix $A$ and a random $m \times n$ sketching matrix $S$, the sketched estimate of the inverse covariance matrix $(A^\top A)^{-1}$ is typically biased: $\mathbb{E}\big[(\tilde A^\top \tilde A)^{-1}\big] \neq (A^\top A)^{-1}$, where $\tilde A = SA$. This phenomenon, which we call inversion bias, arises, e.g., in statistics and distributed optimization, when averaging multiple independently constructed estimates of quantities that depend on the inverse covariance. We develop a framework for analyzing inversion bias, based on our proposed concept of an $(\epsilon,\delta)$-unbiased estimator for random matrices. We show that when the sketching matrix $S$ is dense and has i.i.d. sub-gaussian entries, then after simple rescaling, the estimator $\big(\tfrac{m}{m-d}\,\tilde A^\top \tilde A\big)^{-1}$ is $(\epsilon,\delta)$-unbiased for $(A^\top A)^{-1}$ with a sketch of size $m = O(d + \sqrt{d}/\epsilon)$. This implies that for $m = O(d)$, the inversion bias of this estimator is $O(1/\sqrt{d})$, which is much smaller than the $\Theta(1/\sqrt{m})$ approximation error obtained as a consequence of the subspace embedding guarantee for sub-gaussian sketches. We then propose a new sketching technique, called LEverage Score Sparsified (LESS) embeddings, which uses ideas from both data-oblivious sparse embeddings and data-aware leverage-based row sampling methods, to get $\epsilon$ inversion bias for sketch size $m = O(d\log d + \sqrt{d}/\epsilon)$ in time $O(\mathrm{nnz}(A)\log n + md^2)$, where $\mathrm{nnz}(A)$ is the number of non-zeros in $A$. The key techniques enabling our analysis include an extension of a classical inequality of Bai and Silverstein for random quadratic forms, which we call the Restricted Bai-Silverstein inequality; and anti-concentration of the Binomial distribution via the Paley-Zygmund inequality, which we use to prove a lower bound showing that leverage score sampling sketches generally do not achieve small inversion bias.
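As a rough numerical illustration (not from the paper), the following minimal NumPy sketch assumes an i.i.d. Gaussian sketching matrix and arbitrary illustrative choices of $n$, $d$, $m$, and the number of trials. It averages sketched inverse-covariance estimates over many independent sketches, mirroring the distributed-averaging setting above, and compares the plain average and the $\tfrac{m}{m-d}$-rescaled average against $(A^\top A)^{-1}$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not values from the paper):
n, d, m = 1000, 20, 100                  # tall n x d data matrix, sketch size m
A = rng.standard_normal((n, d))
true_inv = np.linalg.inv(A.T @ A)        # target: inverse covariance (A^T A)^{-1}

trials = 1000
plain = np.zeros((d, d))                 # average of (A~^T A~)^{-1}
rescaled = np.zeros((d, d))              # average of ((m/(m-d)) A~^T A~)^{-1}
for _ in range(trials):
    S = rng.standard_normal((m, n)) / np.sqrt(m)   # dense i.i.d. Gaussian sketch
    At = S @ A                                     # sketched data A~ = S A
    inv_est = np.linalg.inv(At.T @ At)
    plain += inv_est / trials
    rescaled += (m - d) / m * inv_est / trials     # ((m/(m-d)) A~^T A~)^{-1}

rel_err = lambda M: np.linalg.norm(M - true_inv) / np.linalg.norm(true_inv)
print("relative bias, plain averaged estimate:   ", rel_err(plain))
print("relative bias, rescaled averaged estimate:", rel_err(rescaled))
```

Because the per-sketch noise averages out across independent trials, the remaining error of the averaged estimates is dominated by the inversion bias, which makes the effect of the rescaling visible directly.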
View on arXiv