  3. 2202.05630

Scale-free Unconstrained Online Learning for Curved Losses

11 February 2022
J. Mayo
Hédi Hadiji
T. Erven
Abstract

A sequence of works in unconstrained online convex optimisation have investigated the possibility of adapting simultaneously to the norm U of the comparator and the maximum norm G of the gradients. In full generality, matching upper and lower bounds are known which show that this comes at the unavoidable cost of an additive GU^3, which is not needed when either G or U is known in advance. Surprisingly, recent results by Kempka et al. (2019) show that no such price for adaptivity is needed in the specific case of 1-Lipschitz losses like the hinge loss. We follow up on this observation by showing that there is in fact never a price to pay for adaptivity if we specialise to any of the other common supervised online learning losses: our results cover log loss, (linear and non-parametric) logistic regression, square loss prediction, and (linear and non-parametric) least-squares regression. We also fill in several gaps in the literature by providing matching lower bounds with an explicit dependence on U. In all cases we obtain scale-free algorithms, which are suitably invariant under rescaling of the data. Our general goal is to establish achievable rates without concern for computational efficiency, but for linear logistic regression we also provide an adaptive method that is as efficient as the recent non-adaptive algorithm by Agarwal et al. (2021).
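The scale-free property mentioned above can be illustrated with a toy example. The sketch below is not the paper's algorithm; it is one-dimensional online gradient descent with AdaGrad-style step sizes eta_t = 1/sqrt(sum of squared gradients so far), a standard construction that is invariant under rescaling of the losses: multiplying all gradients by c > 0 shrinks each step size by 1/c, so the iterates are unchanged.

```python
import math

def scale_free_ogd(grads, x0=0.0):
    """1-D online gradient descent with AdaGrad-style scale-free
    step sizes eta_t = 1 / sqrt(sum_{s<=t} g_s^2).
    Returns the full sequence of iterates (predictions)."""
    x, sq_sum, iterates = x0, 0.0, [x0]
    for g in grads:
        sq_sum += g * g
        eta = 1.0 / math.sqrt(sq_sum) if sq_sum > 0 else 0.0
        x -= eta * g
        iterates.append(x)
    return iterates

# Rescaling every gradient by c > 0 rescales eta_t by 1/c, so each
# step eta_t * g_t -- and hence every iterate -- is unchanged.
grads = [0.5, -1.2, 3.0, -0.7]
a = scale_free_ogd(grads)
b = scale_free_ogd([10.0 * g for g in grads])
assert all(abs(u - v) < 1e-9 for u, v in zip(a, b))
```

This invariance is what makes an algorithm's regret guarantee meaningful without knowing the scale of the data in advance, which is the setting the paper's adaptive methods target.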
