Beyond Worst-Case Dimensionality Reduction for Sparse Vectors

We study beyond worst-case dimensionality reduction for $s$-sparse vectors. Our work is divided into two parts, each focusing on a different facet of beyond worst-case analysis:

We first consider average-case guarantees. A folklore upper bound based on the birthday paradox states: for any collection $X$ of $s$-sparse vectors in $\mathbb{R}^d$, there exists a linear map to $\mathbb{R}^{O(s^2)}$ which \emph{exactly} preserves the norm of $99\%$ of the vectors in $X$ in any $\ell_p$ norm (as opposed to the usual setting, where guarantees hold for all vectors). We give lower bounds showing that this is indeed optimal in many settings: any oblivious linear map satisfying similar average-case guarantees must map to $\Omega(s^2)$ dimensions. The same lower bound also holds for a wide class of smooth maps, including `encoder-decoder schemes', where we compare the norm of the original vector to that of a smooth function of the embedding. These lower bounds reveal a separation result, as an upper bound of $O(s \log d)$ is possible if we instead use arbitrary (possibly non-smooth) functions, e.g., via compressed sensing algorithms.

Given these lower bounds, we specialize to sparse \emph{non-negative} vectors. For a dataset $X$ of non-negative $s$-sparse vectors and any $p \ge 1$, we give a non-linear map $f$ embedding $X$ into $O(s \log(|X|)/\varepsilon^2)$ dimensions while preserving all pairwise distances in the $\ell_p$ norm up to $1 \pm \varepsilon$, with no dependence on $p$. Surprisingly, the non-negativity assumption enables much smaller embeddings than for arbitrary sparse vectors, where the best known bounds suffer an exponential dependence. Our map also guarantees \emph{exact} dimensionality reduction in the $\ell_\infty$ norm by embedding $X$ into $O(s \log |X|)$ dimensions, which is tight. We show that both the non-linearity of $f$ and the non-negativity of $X$ are necessary, and we provide downstream algorithmic improvements.
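For intuition, the folklore birthday-paradox upper bound in the first part can be realized by a sign-hashing map. The sketch below is a minimal illustration under our own assumptions, not the paper's construction: the bucket count $m = 50s^2$ and all names are illustrative choices. Hashing the $d$ coordinates into $m$ buckets with random signs means a fixed $s$-sparse vector has all of its nonzero coordinates land in distinct buckets with probability at least $1 - s^2/(2m) \ge 0.99$, and on that event every $\ell_p$ norm is preserved exactly.

import numpy as np

def sparse_sign_hash_map(d, s, seed=0):
    """A random linear map R^d -> R^m with m = 50*s^2 buckets (illustrative constant).

    Coordinate i goes to bucket h[i] with sign sigma[i]; the map is x -> Phi @ x
    for the implied m-by-d matrix Phi with entries in {-1, 0, +1}.
    """
    rng = np.random.default_rng(seed)
    m = 50 * s * s                   # birthday paradox: collision prob <= s^2/(2m) <= 1%
    h = rng.integers(0, m, size=d)   # bucket of each coordinate
    sigma = rng.choice([-1.0, 1.0], size=d)

    def embed(x):
        y = np.zeros(m)
        np.add.at(y, h, sigma * x)   # y[h[i]] += sigma[i] * x[i]
        return y

    return embed

d, s, trials = 100_000, 10, 1_000
embed = sparse_sign_hash_map(d, s)
rng = np.random.default_rng(1)
exact = 0
for _ in range(trials):
    x = np.zeros(d)
    x[rng.choice(d, size=s, replace=False)] = rng.standard_normal(s)
    y = embed(x)
    # With no collision on the support, the nonzero entries of y are exactly the
    # nonzero entries of x up to sign, so every l_p norm matches; we test p = 1.
    exact += bool(np.isclose(np.sum(np.abs(y)), np.sum(np.abs(x))))
print(f"norm preserved exactly for {exact / trials:.1%} of the vectors")  # roughly 99%+

Since each vector succeeds with probability at least $0.99$ over the randomness of the map, averaging shows that for any fixed collection $X$ some fixed choice of buckets and signs works for $99\%$ of $X$; the lower bound above says the $\Theta(s^2)$ target dimension of such oblivious linear maps cannot be improved.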
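The separation result can likewise be illustrated with a standard compressed-sensing decoder. The sketch below is again only an assumption-laden illustration: orthogonal matching pursuit stands in for any sparse-recovery algorithm, and the measurement count $m = O(s \log d)$ uses an ad hoc constant. The decoder is highly non-smooth, yet it typically recovers an $s$-sparse vector, and hence its norm, exactly from far fewer than $s^2$ measurements.

import numpy as np

def omp(A, y, s):
    """Orthogonal matching pursuit: greedily recover an s-sparse x with A @ x = y."""
    support, residual = [], y.copy()
    for _ in range(s):
        support.append(int(np.argmax(np.abs(A.T @ residual))))  # most correlated column
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
d, s = 5_000, 8
m = 10 * s * int(np.ceil(np.log(d)))      # m = O(s log d); the constant 10 is ad hoc
A = rng.standard_normal((m, d)) / np.sqrt(m)

x = np.zeros(d)
x[rng.choice(d, size=s, replace=False)] = rng.standard_normal(s)
x_hat = omp(A, A @ x, s)
print(np.allclose(x_hat, x))              # typically True, so every norm of x is recovered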
@article{silwal2025_2502.19865,
  title={Beyond Worst-Case Dimensionality Reduction for Sparse Vectors},
  author={Sandeep Silwal and David P. Woodruff and Qiuyi Zhang},
  journal={arXiv preprint arXiv:2502.19865},
  year={2025}
}