SAPPHIRE: Preconditioned Stochastic Variance Reduction for Faster Large-Scale Statistical Learning

28 January 2025
Jingruo Sun
Zachary Frangella
Madeleine Udell
Abstract

Regularized empirical risk minimization (rERM) has become important in data-intensive fields such as genomics and advertising, with stochastic gradient methods typically used to solve the largest problems. However, ill-conditioned objectives and non-smooth regularizers undermine the performance of traditional stochastic gradient methods, leading to slow convergence and significant computational costs. To address these challenges, we propose the SAPPHIRE (Sketching-based Approximations for Proximal Preconditioning and Hessian Inexactness with variance-REduced gradients) algorithm, which integrates sketch-based preconditioning to tackle ill-conditioning and uses a scaled proximal mapping to minimize the non-smooth regularizer. This stochastic variance-reduced algorithm achieves condition-number-free linear convergence to the optimum, delivering an efficient and scalable solution for ill-conditioned composite large-scale convex machine learning problems. Extensive experiments on lasso and logistic regression demonstrate that SAPPHIRE often converges 20 times faster than other common choices such as Catalyst, SAGA, and SVRG. This advantage persists even when the objective is non-convex or the preconditioner is infrequently updated, highlighting its robust and practical effectiveness.
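To make the style of update concrete, the sketch below shows a preconditioned, variance-reduced proximal step for the lasso, assuming a least-squares loss, a fixed step size eta, and a simple diagonal preconditioner so that the scaled proximal mapping of the l1 norm reduces to elementwise soft-thresholding. SAPPHIRE itself builds its preconditioner from randomized sketches of the Hessian, so the diagonal choice, the function names, and the parameter values here are illustrative assumptions rather than the authors' implementation.

# Hypothetical sketch (not the authors' code): a preconditioned, SVRG-style
# proximal loop for the lasso, F(x) = (1/2n)||Ax - b||^2 + lam*||x||_1.
# The preconditioner P is diagonal here (an assumption for simplicity), so the
# scaled proximal mapping of the l1 norm is elementwise soft-thresholding.
import numpy as np

def soft_threshold(z, t):
    """Elementwise soft-thresholding: prox of t*||.||_1 (t may be a vector)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def preconditioned_prox_svrg(A, b, lam, eta=1.0, n_epochs=20, seed=0):
    rng = np.random.default_rng(seed)
    n, d = A.shape
    # Diagonal curvature estimate used as the preconditioner (assumption).
    P = (A ** 2).mean(axis=0) + 1e-3
    x = np.zeros(d)
    for _ in range(n_epochs):
        x_snap = x.copy()
        full_grad = A.T @ (A @ x_snap - b) / n   # full gradient at the snapshot
        for _ in range(n):
            i = rng.integers(n)
            a_i = A[i]
            # Variance-reduced stochastic gradient (SVRG-style correction).
            g_i = a_i * (a_i @ x - b[i])
            g_snap = a_i * (a_i @ x_snap - b[i])
            v = g_i - g_snap + full_grad
            # Preconditioned gradient step, then the scaled proximal mapping.
            z = x - eta * v / P
            x = soft_threshold(z, eta * lam / P)
        # A practical variant would refresh P only occasionally, as the
        # abstract notes the method tolerates infrequent preconditioner updates.
    return x

Keeping the preconditioner fixed across many inner iterations is what keeps the per-step cost close to that of plain SVRG while still counteracting ill-conditioning.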
