
arXiv:1207.4684
The Fast Cauchy Transform and Faster Robust Linear Regression

19 July 2012
K. Clarkson
P. Drineas
M. Magdon-Ismail
Michael W. Mahoney
Xiangrui Meng
David P. Woodruff
Abstract

We provide fast algorithms for overconstrained $\ell_p$ regression and related problems: for an $n\times d$ input matrix $A$ and vector $b\in\mathbb{R}^n$, in $O(nd\log n)$ time we reduce the problem $\min_{x\in\mathbb{R}^d} \|Ax-b\|_p$ to the same problem with input matrix $\tilde A$ of dimension $s\times d$ and corresponding $\tilde b$ of dimension $s\times 1$. Here, $\tilde A$ and $\tilde b$ are a coreset for the problem, consisting of sampled and rescaled rows of $A$ and $b$; and $s$ is independent of $n$ and polynomial in $d$. Our results improve on the best previous algorithms when $n\gg d$, for all $p\in[1,\infty)$ except $p=2$. We also provide a suite of improved results for finding well-conditioned bases via ellipsoidal rounding, illustrating tradeoffs between running time and conditioning quality, including a one-pass conditioning algorithm for general $\ell_p$ problems. We also provide an empirical evaluation of implementations of our algorithms for $p=1$, comparing them with related algorithms. Our empirical results show that, in the asymptotic regime, the theory is a very good guide to the practical performance of these algorithms. Our algorithms use our faster constructions of well-conditioned bases for $\ell_p$ spaces and, for $p=1$, a fast subspace embedding of independent interest that we call the Fast Cauchy Transform: a distribution over matrices $\Pi:\mathbb{R}^n\mapsto\mathbb{R}^{O(d\log d)}$, found obliviously to $A$, that approximately preserves the $\ell_1$ norms: that is, with large probability, simultaneously for all $x$, $\|Ax\|_1 \approx \|\Pi Ax\|_1$, with distortion $O(d^{2+\eta})$, for an arbitrarily small constant $\eta>0$; and, moreover, $\Pi A$ can be computed in $O(nd\log d)$ time. The techniques underlying our Fast Cauchy Transform include fast Johnson-Lindenstrauss transforms, low-coherence matrices, and rescaling by Cauchy random variables.
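To make the $p=1$ pipeline described above concrete, here is a minimal NumPy/SciPy sketch. It is illustrative only: a dense matrix of i.i.d. standard Cauchy entries stands in for the Fast Cauchy Transform (it plays the same oblivious $\ell_1$-embedding role, but costs $O(nd\,r)$ rather than $O(nd\log d)$), the function names and the sketch/coreset sizes are invented for the example, and the sampling probabilities and constants in the paper differ.

```python
import numpy as np
from scipy.optimize import linprog


def l1_regression(A, b):
    """Solve min_x ||Ax - b||_1 exactly via linear programming."""
    n, d = A.shape
    # Variables: [x (d), t (n)]; minimize sum(t) subject to |Ax - b| <= t.
    c = np.concatenate([np.zeros(d), np.ones(n)])
    I = np.eye(n)
    A_ub = np.block([[A, -I], [-A, -I]])
    b_ub = np.concatenate([b, -b])
    bounds = [(None, None)] * d + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:d]


def sketched_l1_regression(A, b, sketch_rows, coreset_rows, rng=None):
    """Illustrative sketch of the l1 (p = 1) pipeline, with a dense Cauchy
    embedding standing in for the Fast Cauchy Transform."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = A.shape

    # 1. Oblivious l1 subspace embedding: Pi has i.i.d. standard Cauchy
    #    entries, so ||Pi A x||_1 approximates ||A x||_1 for all x with
    #    good probability (sketch_rows should be at least d).
    Pi = rng.standard_cauchy(size=(sketch_rows, n))

    # 2. Conditioning via the sketch: QR-factor Pi A and use U = A R^{-1},
    #    whose l1 row norms serve as sampling scores.
    _, R = np.linalg.qr(Pi @ A)
    U = np.linalg.solve(R.T, A.T).T          # U = A R^{-1}
    scores = np.abs(U).sum(axis=1)
    probs = np.minimum(1.0, coreset_rows * scores / scores.sum())

    # 3. Sample and rescale rows of [A | b] to form the coreset.
    keep = rng.random(n) < probs
    w = 1.0 / probs[keep]
    A_small = A[keep] * w[:, None]
    b_small = b[keep] * w

    # 4. Solve the small l1 regression problem exactly on the coreset.
    return l1_regression(A_small, b_small)
```

With `sketch_rows` a small multiple of $d$ and `coreset_rows` polynomial in $d$, the returned solution approximates the minimizer of the full problem in the sense sketched in the abstract; only the embedding step would change if the dense Cauchy matrix were replaced by the Fast Cauchy Transform.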

View on arXiv