Tight Sensitivity Bounds For Smaller Coresets

2 July 2019
Alaa Maalouf
Adiel Statman
Dan Feldman
arXiv:1907.01433
Abstract

An $\varepsilon$-coreset for Least-Mean-Squares (LMS) of a matrix $A\in\mathbb{R}^{n\times d}$ is a small weighted subset of its rows that approximates the sum of squared distances from its rows to every affine $k$-dimensional subspace of $\mathbb{R}^d$, up to a factor of $1\pm\varepsilon$. Such coresets are useful for hyper-parameter tuning and for solving many least-mean-squares problems, such as low-rank approximation ($k$-SVD), $k$-PCA, Lasso/Ridge/linear regression, and many more. Coresets are also useful for handling streaming, dynamic, and distributed big data in parallel. With high probability, non-uniform sampling based on upper bounds on what is known as the importance or sensitivity of each row in $A$ yields a coreset. The size of the (sampled) coreset is then near-linear in the total sum of these sensitivity bounds. We provide algorithms that compute provably \emph{tight} bounds for the sensitivity of each input row. They are based on two ingredients: (i) an iterative algorithm that computes the exact sensitivity of each point up to arbitrarily small precision for (non-affine) $k$-subspaces, and (ii) a general reduction of independent interest from computing sensitivities for the family of affine $k$-subspaces in $\mathbb{R}^d$ to (non-affine) $(k+1)$-subspaces in $\mathbb{R}^{d+1}$. Experimental results on real-world datasets, including the English Wikipedia documents-term matrix, show that our bounds yield significantly smaller, data-dependent coresets in practice as well. Full open source code is also provided.
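To illustrate the sensitivity-sampling framework the abstract describes (not the paper's tight-bound algorithm), the sketch below upper-bounds each row's sensitivity with a standard, looser surrogate: its leverage score with respect to the top-$k$ right singular subspace, then samples rows with probability proportional to these bounds and reweights them so that sums of squared distances are estimated without bias. The function names `leverage_sensitivity_bounds` and `sensitivity_coreset` are hypothetical and chosen for this example.

```python
import numpy as np

def leverage_sensitivity_bounds(A, k):
    # Upper-bound the sensitivity of each row of A by its squared norm
    # in an orthonormal basis of the top-k left singular vectors
    # (its leverage score), plus a uniform 1/n term. This is a common,
    # looser surrogate for the tight per-row bounds in the paper.
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    n = A.shape[0]
    return np.sum(U[:, :k] ** 2, axis=1) + 1.0 / n

def sensitivity_coreset(A, k, m, rng=None):
    # Sample m rows with probability proportional to their sensitivity
    # bounds, and attach weights 1/(m * p_i) so that weighted sums of
    # squared distances are unbiased estimates of the full sums.
    rng = np.random.default_rng(rng)
    s = leverage_sensitivity_bounds(A, k)
    p = s / s.sum()
    idx = rng.choice(A.shape[0], size=m, replace=True, p=p)
    weights = 1.0 / (m * p[idx])
    return A[idx], weights
```

The coreset size needed for a $1\pm\varepsilon$ guarantee scales near-linearly with the total sum of the sensitivity bounds, which is why tightening the per-row bounds, as the paper does, directly shrinks the coreset.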
