arXiv:2010.07572

Revisiting Projection-free Online Learning: the Strongly Convex Case

15 October 2020
Dan Garber
Ben Kretzu
Abstract

Projection-free optimization algorithms, which are mostly based on the classical Frank-Wolfe method, have gained significant interest in the machine learning community in recent years due to their ability to handle convex constraints that are popular in many applications, but for which computing projections is often computationally impractical in high-dimensional settings, and hence prohibits the use of most standard projection-based methods. In particular, a significant research effort has been put into projection-free methods for online learning. In this paper we revisit the Online Frank-Wolfe (OFW) method suggested by Hazan and Kale \cite{Hazan12} and fill a gap that has been left unnoticed for several years: OFW achieves a faster rate of $O(T^{2/3})$ on strongly convex functions (as opposed to the standard $O(T^{3/4})$ for convex but not strongly convex functions), where $T$ is the sequence length. This is somewhat surprising since it is known that for offline optimization, in general, strong convexity does not lead to faster rates for Frank-Wolfe. We also revisit the bandit setting under strong convexity and prove a similar bound of $\tilde{O}(T^{2/3})$ (instead of $O(T^{3/4})$ without strong convexity). Hence, in the current state of affairs, the best projection-free upper bounds for the full-information and bandit settings with strongly convex and nonsmooth functions match, up to logarithmic factors, in $T$.
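To make the projection-free idea concrete, the following is a minimal sketch of an Online Frank-Wolfe-style update: one linear-optimization-oracle call per round instead of a projection. This is not the paper's exact algorithm — the feasible set (unit $\ell_1$ ball), the smoothed objective, the regularization constant, and the step sizes are all illustrative assumptions, not taken from \cite{Hazan12}.

```python
import numpy as np

def lmo_l1_ball(g):
    """Linear minimization oracle over the unit l1 ball:
    argmin_{||v||_1 <= 1} <g, v> puts all mass on the largest-|g| coordinate."""
    i = int(np.argmax(np.abs(g)))
    v = np.zeros_like(g)
    v[i] = -np.sign(g[i]) if g[i] != 0 else 1.0
    return v

def online_frank_wolfe(grads, eta=0.1):
    """Illustrative sketch: one projection-free FW step per round on a
    smoothed cumulative objective F_t(x) = eta * <sum_{s<=t} g_s, x> + ||x||^2.
    Constants and step sizes are placeholders, not the paper's choices."""
    d = len(grads[0])
    x = np.zeros(d)            # x_1 = 0, which lies inside the l1 ball
    g_sum = np.zeros(d)
    iterates = []
    for t, g in enumerate(grads, start=1):
        iterates.append(x.copy())      # play x_t, then observe g_t
        g_sum += g
        grad_F = eta * g_sum + 2.0 * x # gradient of F_t at x_t
        v = lmo_l1_ball(grad_F)        # single oracle call, no projection
        gamma = 2.0 / (t + 2)          # standard Frank-Wolfe step size
        x = x + gamma * (v - x)        # convex combination stays feasible
    return iterates
```

Because each update is a convex combination of the current point and a vertex of the feasible set, every iterate stays feasible without ever computing a projection — the property that makes these methods attractive when projections are expensive.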
