Fast and Near-Optimal Diagonal Preconditioning

4 August 2020
A. Jambulapati
Jingkai Li
Christopher Musco
Aaron Sidford
Kevin Tian
Abstract

The convergence rates of iterative methods for solving a linear system $\mathbf{A} x = b$ typically depend on the condition number of the matrix $\mathbf{A}$. Preconditioning is a common way of speeding up these methods by reducing that condition number in a computationally inexpensive way. In this paper, we revisit the decades-old problem of how to best improve $\mathbf{A}$'s condition number by left or right diagonal rescaling. We make progress on this problem in several directions. First, we provide new bounds for the classic heuristic of scaling $\mathbf{A}$ by its diagonal values (a.k.a. Jacobi preconditioning). We prove that this approach reduces $\mathbf{A}$'s condition number to within a quadratic factor of the best possible scaling. Second, we give a solver for structured mixed packing and covering semidefinite programs (MPC SDPs) which computes a constant-factor optimal scaling for $\mathbf{A}$ in $\widetilde{O}(\text{nnz}(\mathbf{A}) \cdot \text{poly}(\kappa^\star))$ time; this matches the cost of solving the linear system after scaling, up to a $\widetilde{O}(\text{poly}(\kappa^\star))$ factor. Third, we demonstrate that a sufficiently general width-independent MPC SDP solver would imply near-optimal runtimes for the scaling problems we consider, and for natural variants concerned with measures of average conditioning. Finally, we highlight connections of our preconditioning techniques to semi-random noise models, as well as applications in reducing risk in several statistical regression models.
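
The Jacobi heuristic mentioned in the abstract is simple to state concretely: replace $\mathbf{A}$ with $\mathbf{D}^{-1/2} \mathbf{A} \mathbf{D}^{-1/2}$, where $\mathbf{D}$ is the diagonal of $\mathbf{A}$. The following is a minimal NumPy sketch of that rescaling on a synthetic positive definite matrix with badly scaled rows and columns; the matrix construction and size are illustrative assumptions, and the code does not implement the paper's SDP-based constant-factor-optimal scaling algorithm.

# Minimal sketch of Jacobi (diagonal) preconditioning: symmetrically rescale a
# positive definite matrix A by its diagonal and compare condition numbers.
# Synthetic test matrix only; not the paper's SDP-based scaling algorithm.
import numpy as np

rng = np.random.default_rng(0)

# Build a positive definite matrix with wildly varying row/column scales
# (hypothetical test instance).
n = 200
B = rng.standard_normal((n, n))
scales = 10.0 ** rng.uniform(-3, 3, size=n)
A = (B @ B.T + n * np.eye(n)) * np.outer(scales, scales)

# Jacobi preconditioning: D = diag(A), work with D^{-1/2} A D^{-1/2}.
d = np.diag(A)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
A_scaled = D_inv_sqrt @ A @ D_inv_sqrt

print("condition number of A:         %.3e" % np.linalg.cond(A))
print("condition number after Jacobi: %.3e" % np.linalg.cond(A_scaled))

On instances like this, the diagonal rescaling typically shrinks the condition number dramatically, and the paper's first result quantifies how far it can be from the best diagonal scaling: at most a quadratic factor.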
