ResearchTrend.AI
Scaling Up Differentially Private LASSO Regularized Logistic Regression via Faster Frank-Wolfe Iterations

30 October 2023
Edward Raff
Amol Khanna
Fred Lu
Abstract

To the best of our knowledge, there are no methods today for training differentially private regression models on sparse input data. To remedy this, we adapt the Frank-Wolfe algorithm for $L_1$ penalized linear regression to be aware of sparse inputs and to use them effectively. In doing so, we reduce the training time of the algorithm from $\mathcal{O}(TDS + TNS)$ to $\mathcal{O}(NS + T\sqrt{D}\log D + TS^2)$, where $T$ is the number of iterations and $S$ is the sparsity rate of a dataset with $N$ rows and $D$ features. Our results demonstrate that this procedure can reduce runtime by a factor of up to $2{,}200\times$, depending on the value of the privacy parameter $\epsilon$ and the sparsity of the dataset.
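To give a concrete sense of the setting, the sketch below shows a plain (non-private) Frank-Wolfe solver for logistic regression constrained to an $L_1$ ball, written to exploit sparse inputs: the gradient is a sparse matrix-vector product costing $\mathcal{O}(NS)$ per pass, and each iterate changes only one coordinate, so the prediction vector can be updated using a single column of $X$. This is an illustrative assumption-laden toy, not the paper's algorithm: the differentially private vertex selection (and the paper's faster per-iteration bookkeeping) is omitted, and the function name and parameters are invented for this example.

```python
import numpy as np
from scipy import sparse

def frank_wolfe_l1_logreg(X, y, radius=1.0, T=100):
    """Frank-Wolfe for logistic regression over {w : ||w||_1 <= radius}.

    Illustrative, NON-private sketch: the noisy vertex selection that
    the paper uses for differential privacy is deliberately left out.

    X : scipy.sparse matrix of shape (n, d); y : array of {0, 1} labels.
    """
    n, d = X.shape
    w = np.zeros(d)
    z = np.zeros(n)  # running X @ w, maintained incrementally
    for t in range(T):
        p = 1.0 / (1.0 + np.exp(-z))       # per-row predictions
        g = X.T @ (p - y) / n              # gradient: one sparse matvec
        i = int(np.argmax(np.abs(g)))      # LMO over the L1-ball vertices
        s_i = -radius * np.sign(g[i])      # chosen vertex is s_i * e_i
        gamma = 2.0 / (t + 2.0)            # standard Frank-Wolfe step size
        # w <- (1 - gamma) * w + gamma * s touches only coordinate i
        w *= (1.0 - gamma)
        w[i] += gamma * s_i
        # keep z = X @ w in sync using only column i of X
        z *= (1.0 - gamma)
        z += gamma * s_i * X[:, [i]].toarray().ravel()
    return w
```

Because each Frank-Wolfe update moves toward a single vertex of the $L_1$ ball, the iterates stay sparse, which is the property that makes the algorithm a natural fit for sparse high-dimensional data in the first place.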
