Leverage Score Sampling for Faster Accelerated Regression and ERM

22 November 2017
Naman Agarwal
Sham Kakade
Rahul Kidambi
Y. Lee
Praneeth Netrapalli
Aaron Sidford
Abstract

Given a matrix $\mathbf{A}\in\mathbb{R}^{n\times d}$ and a vector $b\in\mathbb{R}^{n}$, we show how to compute an $\epsilon$-approximate solution to the regression problem $\min_{x\in\mathbb{R}^{d}} \frac{1}{2}\|\mathbf{A}x-b\|_{2}^{2}$ in time $\tilde{O}((n+\sqrt{d\cdot\kappa_{\text{sum}}})\cdot s\cdot\log\epsilon^{-1})$, where $\kappa_{\text{sum}}=\mathrm{tr}(\mathbf{A}^{\top}\mathbf{A})/\lambda_{\min}(\mathbf{A}^{\top}\mathbf{A})$ and $s$ is the maximum number of non-zero entries in a row of $\mathbf{A}$. Our algorithm improves upon the previous best running time of $\tilde{O}((n+\sqrt{n\cdot\kappa_{\text{sum}}})\cdot s\cdot\log\epsilon^{-1})$. We achieve our result through a careful combination of leverage score sampling techniques, proximal point methods, and accelerated coordinate descent. Our method not only matches the performance of previous methods, but further improves whenever the leverage scores of rows are small (up to polylogarithmic factors). We also provide a non-linear generalization of these results that improves the running time for solving a broader class of empirical risk minimization (ERM) problems.
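The abstract only names its ingredients; as an illustration of the first of them, here is a minimal NumPy sketch of leverage score sampling for least squares. This is not the paper's algorithm (which additionally uses proximal point methods and accelerated coordinate descent); the helper names, the exact QR-based score computation, and the sample size `m` are illustrative assumptions.

```python
import numpy as np

def leverage_scores(A):
    """Exact leverage scores tau_i = a_i^T (A^T A)^{-1} a_i,
    computed via a thin QR factorization: tau_i = ||Q_i||_2^2."""
    Q, _ = np.linalg.qr(A)           # A = QR with orthonormal Q
    return np.sum(Q**2, axis=1)      # squared row norms of Q

def sample_rows(A, b, m, rng):
    """Sample m rows with probability proportional to leverage scores,
    with importance-sampling weights so the sketch is unbiased."""
    tau = leverage_scores(A)
    p = tau / tau.sum()
    idx = rng.choice(A.shape[0], size=m, replace=True, p=p)
    w = 1.0 / np.sqrt(m * p[idx])    # standard reweighting for row sampling
    return A[idx] * w[:, None], b[idx] * w

# Usage: solve the sketched problem as a cheap surrogate for
# min_x 0.5 * ||Ax - b||_2^2 on the full data.
rng = np.random.default_rng(0)
n, d = 2000, 50
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)
As, bs = sample_rows(A, b, m=400, rng=rng)
x_sketch, *_ = np.linalg.lstsq(As, bs, rcond=None)
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(x_sketch - x_exact))
```

Note that computing exact scores via QR costs $O(nd^{2})$, which defeats the purpose at scale; fast solvers, including the regime targeted by this paper, rely on approximate leverage scores. The sketch above only shows what the sampling distribution looks like.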
