Sample-Optimal Low-Rank Approximation of Distance Matrices

2 June 2019
Piotr Indyk
A. Vakilian
Tal Wagner
David P. Woodruff
Abstract

A distance matrix $A \in \mathbb{R}^{n \times m}$ represents all pairwise distances, $A_{ij} = \mathrm{d}(x_i, y_j)$, between two point sets $x_1, \ldots, x_n$ and $y_1, \ldots, y_m$ in an arbitrary metric space $(\mathcal{Z}, \mathrm{d})$. Such matrices arise in various computational contexts such as learning image manifolds, handwriting recognition, and multi-dimensional unfolding. In this work we study algorithms for low-rank approximation of distance matrices. Recent work by Bakshi and Woodruff (NeurIPS 2018) showed it is possible to compute a rank-$k$ approximation of a distance matrix in time $O((n+m)^{1+\gamma}) \cdot \mathrm{poly}(k, 1/\epsilon)$, where $\epsilon > 0$ is an error parameter and $\gamma > 0$ is an arbitrarily small constant. Notably, their bound is sublinear in the matrix size, which is unachievable for general matrices. We present an algorithm that is both simpler and more efficient. It reads only $O((n+m)k/\epsilon)$ entries of the input matrix, and has a running time of $O(n+m) \cdot \mathrm{poly}(k, 1/\epsilon)$. We complement the sample complexity of our algorithm with a matching lower bound on the number of entries that must be read by any algorithm. We provide experimental results to validate the approximation quality and running time of our algorithm.
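To make the sampling idea concrete, the sketch below shows a CUR-style rank-$k$ approximation that accesses the matrix only through an on-demand entry oracle, reading roughly $(n+m) \cdot O(k/\epsilon)$ entries in total. It is a minimal illustration in the spirit of the result, not the authors' algorithm: the uniform row/column sampling, the function name sampled_rank_k_approx, and the sample-size choices are assumptions made for this example.

```python
import numpy as np

def sampled_rank_k_approx(dist, n, m, k, eps, seed=None):
    """Illustrative CUR-style rank-k approximation of an n x m distance
    matrix, accessed only through the entry oracle dist(i, j).

    Reads n*s + t*m = O((n+m) k/eps) entries and never materializes A.
    (A simplified sketch, not the paper's algorithm.)
    """
    rng = np.random.default_rng(seed)
    s = min(m, max(k + 1, int(np.ceil(k / eps))))  # number of sampled columns
    t = min(n, max(k + 1, int(np.ceil(k / eps))))  # number of sampled rows
    cols = rng.choice(m, size=s, replace=False)    # uniform column sample
    rows = rng.choice(n, size=t, replace=False)    # uniform row sample

    # Read sampled entries only: C holds s full columns, R holds t full rows.
    C = np.array([[dist(i, j) for j in cols] for i in range(n)])  # n x s
    R = np.array([[dist(i, j) for j in range(m)] for i in rows])  # t x m
    W = C[rows, :]                                                # t x s intersection

    # Rank-k pseudoinverse of the intersection block (the CUR middle factor).
    U, sig, Vt = np.linalg.svd(W, full_matrices=False)
    inv = np.divide(1.0, sig[:k], out=np.zeros_like(sig[:k]), where=sig[:k] > 1e-12)
    W_pinv_k = Vt[:k].T @ np.diag(inv) @ U[:, :k].T               # s x t

    # Return factors of the rank-<=k approximation A ~= (C @ W_pinv_k) @ R.
    return C @ W_pinv_k, R
```

A small usage example on a Euclidean distance matrix (the full matrix is formed here only to measure the error):

```python
rng = np.random.default_rng(0)
X, Y = rng.normal(size=(500, 10)), rng.normal(size=(400, 10))
dist = lambda i, j: float(np.linalg.norm(X[i] - Y[j]))

L, R = sampled_rank_k_approx(dist, 500, 400, k=10, eps=0.5, seed=1)
A = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)  # full matrix, for checking only
print(np.linalg.norm(A - L @ R) / np.linalg.norm(A))       # relative Frobenius error
```

Reading whole sampled rows and columns, rather than scattered individual entries, is what keeps the read count at $O((n+m)k/\epsilon)$; the paper's matching lower bound shows that this entry count cannot be improved in general.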

arXiv: 1906.00339