Provably Efficient Lifelong Reinforcement Learning with Linear Function Approximation

1 June 2022
Sanae Amani
Lin F. Yang
Ching-An Cheng
arXiv:2206.00270
Abstract

We study lifelong reinforcement learning (RL) in a regret minimization setting of linear contextual Markov decision processes (MDPs), where the agent needs to learn a multi-task policy while solving a streaming sequence of tasks. We propose an algorithm, called UCB Lifelong Value Distillation (UCBlvd), that provably achieves sublinear regret for any sequence of tasks, which may be adaptively chosen based on the agent's past behaviors. Remarkably, our algorithm uses only a sublinear number of planning calls, which means that the agent eventually learns a policy that is near optimal for multiple tasks (seen or unseen) without the need for deliberate planning. A key to this property is a new structural assumption that enables computation sharing across tasks during exploration. Specifically, for $K$ task episodes of horizon $H$, our algorithm has a regret bound $\tilde{\mathcal{O}}(\sqrt{(d^3+d'd)H^4K})$ based on $\mathcal{O}(dH\log(K))$ planning calls, where $d$ and $d'$ are the feature dimensions of the dynamics and rewards, respectively. This theoretical guarantee implies that our algorithm can enable a lifelong learning agent to accumulate experiences and learn to rapidly solve new tasks.
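To make the planning-call count concrete, below is a minimal, hypothetical sketch of a determinant-doubling trigger, a standard lazy-update device in linear-MDP analyses that yields on the order of $dH\log(K)$ replanning events. This is an illustration of that general mechanism only, not the paper's UCBlvd algorithm; all variable names, the random features, and the specific trigger are assumptions made for the example.

```python
import numpy as np

# Hypothetical illustration (not the paper's UCBlvd): replan only when the
# log-determinant of a regularized design matrix of observed features has
# doubled since the last planning call, instead of replanning every episode.

d = 8          # feature dimension of the dynamics (the paper's d)
H = 5          # episode horizon
K = 10_000     # number of task episodes
lam = 1.0      # ridge regularization

# One design (covariance) matrix per step of the horizon.
Lambdas = [lam * np.eye(d) for _ in range(H)]
last_logdet = [d * np.log(lam) for _ in range(H)]

rng = np.random.default_rng(0)
planning_calls = 0

for k in range(K):
    replan = False
    for h in range(H):
        phi = rng.normal(size=d) / np.sqrt(d)      # stand-in for an observed feature
        Lambdas[h] += np.outer(phi, phi)           # accumulate information
        _, logdet = np.linalg.slogdet(Lambdas[h])
        if logdet > last_logdet[h] + np.log(2):    # determinant doubled at step h
            last_logdet[h] = logdet
            replan = True
    if replan:
        planning_calls += 1   # recompute policy/value estimates only here

print(f"episodes: {K}, planning calls: {planning_calls}")
```

Because each per-step determinant can double only about $d\log(K)$ times over $K$ episodes, the total number of planning calls in such a scheme grows logarithmically in $K$ rather than linearly, which is the kind of behavior the paper's $\mathcal{O}(dH\log(K))$ bound describes.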
