Never Go Full Batch (in Stochastic Convex Optimization)

29 June 2021
Idan Amir
Yair Carmon
Tomer Koren
Roi Livni
Abstract

We study the generalization performance of full-batch optimization algorithms for stochastic convex optimization: these are first-order methods that only access the exact gradient of the empirical risk (rather than gradients with respect to individual data points), and include a wide range of algorithms such as gradient descent, mirror descent, and their regularized and/or accelerated variants. We provide a new separation result showing that, while algorithms such as stochastic gradient descent can generalize and optimize the population risk to within $\epsilon$ after $O(1/\epsilon^2)$ iterations, full-batch methods either need at least $\Omega(1/\epsilon^4)$ iterations or exhibit a dimension-dependent sample complexity.
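To make the oracle distinction concrete, the sketch below contrasts the two types of gradient access on a synthetic least-squares objective. This toy problem and the step size are purely illustrative assumptions, not the hard convex instance constructed in the paper; it only shows what "exact empirical-risk gradient" versus "gradient of an individual data point" means operationally.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic convex (least-squares) empirical risk -- illustrative only,
# not the lower-bound construction from the paper.
n, d = 1000, 20
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

def empirical_risk_grad(w):
    # Exact gradient of the empirical risk: the only oracle a
    # full-batch method is allowed to query.
    return X.T @ (X @ w - y) / n

def sample_grad(w, i):
    # Gradient with respect to a single data point (x_i, y_i),
    # as used by stochastic gradient descent.
    return X[i] * (X[i] @ w - y[i])

eta = 0.01            # illustrative step size
w_fb = np.zeros(d)    # full-batch gradient descent iterate
w_sgd = np.zeros(d)   # SGD iterate

for t in range(500):
    w_fb -= eta * empirical_risk_grad(w_fb)              # one full-batch step
    w_sgd -= eta * sample_grad(w_sgd, rng.integers(n))   # one stochastic step

print("empirical risk (full batch):", 0.5 * np.mean((X @ w_fb - y) ** 2))
print("empirical risk (SGD):       ", 0.5 * np.mean((X @ w_sgd - y) ** 2))
```

On an easy instance like this both methods drive the empirical risk down; the paper's separation concerns the population risk, where matching SGD's $O(1/\epsilon^2)$ guarantee with only full-batch gradient access provably requires either $\Omega(1/\epsilon^4)$ iterations or a dimension-dependent number of samples.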

arXiv: 2107.00469