
Randomized Dimensionality Reduction for Euclidean Maximization and Diversity Measures

Main: 8 pages
Appendix: 11 pages
Bibliography: 4 pages
Figures: 6
Tables: 2
Abstract

Randomized dimensionality reduction is a widely used algorithmic technique for speeding up large-scale Euclidean optimization problems. In this paper, we study dimension reduction for a variety of maximization problems, including max-matching, max-spanning tree, and max TSP, as well as various measures of dataset diversity. For these problems, we show that the effect of dimension reduction is intimately tied to the \emph{doubling dimension} $\lambda_X$ of the underlying dataset $X$ -- a quantity measuring the intrinsic dimensionality of a point set. Specifically, we prove that a target dimension of $O(\lambda_X)$ suffices to approximately preserve the value of any near-optimal solution, which we also show is necessary for some of these problems. This is in contrast to classical dimension reduction results, whose dependence increases with the dataset size $|X|$. We also provide empirical results validating the quality of solutions found in the projected space, as well as the speedups obtained from dimensionality reduction.
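
The following is a minimal sketch (not the authors' code) of the kind of experiment the abstract describes: project a point set with a random Gaussian map and compare the maximum-weight spanning tree value before and after projection. The sizes n, d, and the target dimension k are arbitrary illustrative choices, not the paper's $O(\lambda_X)$ bound.

import numpy as np
from scipy.spatial.distance import pdist, squareform

def max_spanning_tree_value(points: np.ndarray) -> float:
    # Prim-style construction of a maximum-weight spanning tree on the
    # complete Euclidean graph over `points`; returns its total edge weight.
    dists = squareform(pdist(points))
    n = len(points)
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    best = dists[0].copy()          # heaviest edge from the tree to each vertex
    total = 0.0
    for _ in range(n - 1):
        candidates = np.where(in_tree, -np.inf, best)
        v = int(np.argmax(candidates))
        total += candidates[v]
        in_tree[v] = True
        best = np.maximum(best, dists[v])
    return total

rng = np.random.default_rng(0)
n, d, k = 200, 512, 16              # illustrative sizes: n points in R^d, target dimension k
X = rng.normal(size=(n, d))

G = rng.normal(size=(d, k)) / np.sqrt(k)   # random Gaussian (Johnson-Lindenstrauss-style) projection
X_proj = X @ G

original = max_spanning_tree_value(X)
projected = max_spanning_tree_value(X_proj)
print(f"max spanning tree value: original={original:.2f}, "
      f"projected={projected:.2f}, ratio={projected / original:.3f}")

The 1/sqrt(k) scaling of the projection matrix makes the map preserve squared norms in expectation; the paper's guarantee concerns how small k can be, as a function of the doubling dimension, while still approximately preserving the values of near-optimal solutions.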

@article{gao2025_2506.00165,
  title={Randomized Dimensionality Reduction for Euclidean Maximization and Diversity Measures},
  author={Jie Gao and Rajesh Jayaram and Benedikt Kolbe and Shay Sapir and Chris Schwiegelshohn and Sandeep Silwal and Erik Waingarten},
  journal={arXiv preprint arXiv:2506.00165},
  year={2025}
}