Regret Bounds for Lifelong Learning

27 October 2016
Pierre Alquier
The Tien Mai
Massimiliano Pontil
Abstract

We consider the problem of transfer learning in an online setting. Different tasks are presented sequentially and processed by a within-task algorithm. We propose a lifelong learning strategy which refines the underlying data representation used by the within-task algorithm, thereby transferring information from one task to the next. We show that when the within-task algorithm comes with some regret bound, our strategy inherits this good property. Our bounds are in expectation for a general loss function, and uniform for a convex loss. We discuss applications to dictionary learning and to finite sets of predictors. In the latter case, we improve previous O(1/\sqrt{m}) bounds to O(1/m), where m is the per-task sample size.
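The setup described in the abstract, an outer loop that refines a shared data representation while a within-task online learner processes each task's samples sequentially, can be sketched roughly as follows. This is a minimal illustration under assumed choices (online gradient descent within tasks, squared loss, a gradient-style meta update on a linear representation D); it is not the paper's actual algorithm, whose transfer strategy and regret analysis differ.

```python
import numpy as np

# Hypothetical sketch of a lifelong learning loop: an outer "meta" update
# refines a shared linear representation D across tasks, while a within-task
# online learner (here, OGD on squared loss) handles each task. Generic
# illustration only, not the authors' method.

rng = np.random.default_rng(0)
d, k = 10, 3                            # ambient dimension, representation size
D = 0.1 * rng.standard_normal((k, d))   # shared representation (dictionary)
eta_task, eta_meta = 0.1, 0.01          # within-task and meta step sizes

def run_task(D, X, y, eta):
    """Within-task online learner: gradient steps on w over features z = D x."""
    w = np.zeros(D.shape[0])
    losses = []
    for x, target in zip(X, y):
        z = D @ x
        err = w @ z - target
        losses.append(0.5 * err ** 2)
        w -= eta * err * z              # OGD step on the task weights
    return w, np.mean(losses)

for t in range(50):                     # tasks arrive sequentially
    w_true = rng.standard_normal(d)
    X = rng.standard_normal((20, d))    # m = 20 samples per task
    y = X @ w_true + 0.01 * rng.standard_normal(20)
    w, avg_loss = run_task(D, X, y, eta_task)
    # Meta update: nudge D to reduce this task's average loss, transferring
    # information about the representation to future tasks.
    grad_D = np.zeros_like(D)
    for x, target in zip(X, y):
        err = w @ (D @ x) - target
        grad_D += err * np.outer(w, x) / len(y)
    D -= eta_meta * grad_D
```

Under this reading, the within-task learner's regret governs the inner losses, while the quality of the evolving representation D determines the comparator across tasks; the paper's contribution is showing that the lifelong strategy inherits the within-task regret guarantee.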
