arXiv:1809.09350
Fully Implicit Online Learning

25 September 2018
Chaobing Song
Ji Liu
Han Liu
Yong Jiang
Tong Zhang
Abstract

Regularized online learning is widely used in machine learning applications. In online learning, performing exact minimization (i.e., an implicit update) is known to benefit both the numerical stability and the structure of the solution. In this paper we study a class of regularized online algorithms that linearize neither the loss function nor the regularizer, which we call \emph{fully implicit online learning} (FIOL). We show that for an arbitrary Bregman divergence, FIOL achieves $O(\sqrt{T})$ regret in the general convex setting and $O(\log T)$ regret in the strongly convex setting, and that the regret enjoys a one-step improvement effect because it avoids the approximation error of linearization. We then propose efficient algorithms to solve the FIOL subproblem, and show that even when the subproblem has no closed-form solution, it can be solved with complexity comparable to that of linearized online algorithms. Experiments validate the proposed approaches.
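The abstract does not spell out the update rule, so the following is only a rough illustration of what a fully implicit step looks like, and of why its per-step subproblem can be as cheap as a linearized update. The sketch assumes logistic loss, no regularizer, and the Euclidean (squared) divergence; the function name `implicit_logistic_step` and all implementation details are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np
from scipy.optimize import brentq

def implicit_logistic_step(w, x, y, eta):
    """One fully implicit update with the Euclidean divergence (a sketch):

        w_next = argmin_v  log(1 + exp(-y * (v @ x))) + ||v - w||^2 / (2 * eta)

    For a loss of the form l(v @ x), the optimality condition gives
    w_next = w - eta * l'(z) * x with z = w_next @ x, so the d-dimensional
    subproblem reduces to one-dimensional root finding in the margin z.
    Here y is a label in {-1, +1}.
    """
    z0 = w @ x                   # margin before the update
    s = eta * (x @ x)
    if s == 0.0:                 # degenerate input: the subproblem is solved by w itself
        return w

    def lprime(z):               # derivative of the logistic loss in the margin
        return -y / (1.0 + np.exp(y * z))

    def g(z):                    # strictly increasing, since the loss is convex
        return z - z0 + s * lprime(z)

    # |l'| < 1 for the logistic loss, so the unique root lies in [z0 - s, z0 + s].
    z = brentq(g, z0 - s, z0 + s)
    return w - eta * lprime(z) * x
```

For comparison, the linearized (gradient-descent) step would be `w - eta * lprime(z0) * x`, evaluating the loss derivative at the pre-update margin; the implicit step instead evaluates it at the post-update margin, at the cost of one scalar root-finding problem per round.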
