Streaming Complexity of SVMs

7 July 2020
Alexandr Andoni
Collin Burns
Yi Li
S. Mahabadi
David P. Woodruff
arXiv:2007.03633
Abstract

We study the space complexity of solving the bias-regularized SVM problem in the streaming model. This is a classic supervised learning problem that has drawn lots of attention, including for developing fast algorithms for solving the problem approximately. One of the most widely used algorithms for approximately optimizing the SVM objective is Stochastic Gradient Descent (SGD), which requires only $O(\frac{1}{\lambda\epsilon})$ random samples, and which immediately yields a streaming algorithm that uses $O(\frac{d}{\lambda\epsilon})$ space. For related problems, better streaming algorithms are only known for smooth functions, unlike the SVM objective that we focus on in this work. We initiate an investigation of the space complexity for both finding an approximate optimum of this objective, and for the related "point estimation" problem of sketching the data set to evaluate the function value $F_\lambda$ on any query $(\theta, b)$. We show that, for both problems, for dimensions $d = 1, 2$, one can obtain streaming algorithms with space polynomially smaller than $\frac{1}{\lambda\epsilon}$, which is the complexity of SGD for strongly convex functions like the bias-regularized SVM, and which is known to be tight in general, even for $d = 1$. We also prove polynomial lower bounds for both point estimation and optimization. In particular, for point estimation we obtain a tight bound of $\Theta(1/\sqrt{\epsilon})$ for $d = 1$ and a nearly tight lower bound of $\widetilde{\Omega}(d/\epsilon^2)$ for $d = \Omega(\log(1/\epsilon))$. Finally, for optimization, we prove an $\Omega(1/\sqrt{\epsilon})$ lower bound for $d = \Omega(\log(1/\epsilon))$, and show similar bounds when $d$ is constant.
