Joint Representation Training in Sequential Tasks with Shared Structure

24 June 2022
Aldo Pacchiano
Ofir Nachum
Nilesh Tripuraneni
Peter L. Bartlett
arXiv:2206.12441
Abstract

Classical theory in reinforcement learning (RL) predominantly focuses on the single-task setting, where an agent learns to solve a task through trial-and-error experience, given access to data only from that task. However, many recent empirical works have demonstrated the significant practical benefits of leveraging a joint representation trained across multiple related tasks. In this work we theoretically analyze such a setting, formalizing the concept of task relatedness as a shared state-action representation that admits linear dynamics in all the tasks. We introduce the Shared-MatrixRL algorithm for the setting of multitask MatrixRL. In the presence of $P$ episodic tasks of dimension $d$ sharing a joint $r \ll d$ low-dimensional representation, we show the regret on the $P$ tasks can be improved from $O(PHd\sqrt{NH})$ to $O((Hd\sqrt{rP} + HP\sqrt{rd})\sqrt{NH})$ over $N$ episodes of horizon $H$. These gains coincide with those observed in other linear models in contextual bandits and RL. In contrast with previous works that have studied multitask RL in other function approximation models, we show that in the presence of a bilinear optimization oracle and finite state-action spaces there exists a computationally efficient algorithm for multitask MatrixRL via a reduction to quadratic programming. We also develop a simple technique to shave off a $\sqrt{H}$ factor from the regret upper bounds of some episodic linear problems.
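To make the size of this gain explicit, the following back-of-the-envelope comparison is derived purely from the two bounds quoted above (a sketch of the scaling, not a restatement of the paper's formal theorem). In MatrixRL the transition model factors through $d$-dimensional state-action features, so the rank parameter $r$ below measures how small the shared representation is relative to that ambient dimension.

\[
\frac{\text{Shared-MatrixRL regret}}{\text{independent MatrixRL regret}}
\approx \frac{\bigl(Hd\sqrt{rP} + HP\sqrt{rd}\bigr)\sqrt{NH}}{PHd\sqrt{NH}}
= \sqrt{\frac{r}{P}} + \sqrt{\frac{r}{d}}
\]

The common $\sqrt{NH}$ factor cancels, and the remaining ratio is small exactly when $r \ll \min(P, d)$: sharing the representation pays off when it is low-dimensional relative to both the ambient dimension and the number of tasks.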
