Multi-task Batch Reinforcement Learning with Metric Learning

25 September 2019
Jiachen Li, Q. Vuong, Shuang Liu, Minghua Liu, K. Ciosek, Keith Ross, Henrik I. Christensen, H. Su
    OffRL
arXiv:1909.11373
Abstract

We tackle the Multi-task Batch Reinforcement Learning problem. Given multiple datasets collected from different tasks, we train a multi-task policy to perform well in unseen tasks sampled from the same distribution. The task identities of the unseen tasks are not provided. To perform well, the policy must infer the task identity from collected transitions by modelling its dependency on states, actions and rewards. Because the different datasets may have state-action distributions with large divergence, the task inference module can learn to ignore the rewards and spuriously correlate only state-action pairs to the task identity, leading to poor test-time performance. To robustify task inference, we propose a novel application of the triplet loss. To mine hard negative examples, we relabel the transitions from the training tasks by approximating their reward functions. When we allow further training on the unseen tasks, using the trained policy as an initialization leads to significantly faster convergence compared to randomly initialized policies (up to 80% improvement, across 5 different MuJoCo task distributions). We name our method MBML (Multi-task Batch RL with Metric Learning).
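To make the triplet-loss idea concrete, below is a minimal PyTorch sketch, not the authors' implementation: the encoder, the convention that the last column of a transition batch holds the reward, and all names are illustrative assumptions. The anchor and positive are task embeddings inferred from two transition batches of the same training task; the hard negative reuses the anchor's state-action pairs but relabels the rewards (standing in for another task's approximated reward function), so the embeddings can only be separated by attending to rewards.

import torch
import torch.nn.functional as F

def triplet_task_inference_loss(encode, batch_a, batch_b, batch_relabeled, margin=1.0):
    # encode: maps a batch of (state, action, reward) transitions to one task embedding.
    anchor = encode(batch_a)            # embedding of task i, first batch
    positive = encode(batch_b)          # embedding of task i, second batch
    negative = encode(batch_relabeled)  # batch_a with rewards swapped to another task's
    # Pull same-task embeddings together; push the reward-relabelled
    # embedding at least `margin` away in embedding space.
    return F.triplet_margin_loss(anchor, positive, negative, margin=margin)

# Toy usage: transitions are 6-dim vectors whose last entry is the reward.
torch.manual_seed(0)
net = torch.nn.Linear(6, 16)

def encode(transitions):
    return net(transitions).mean(dim=0, keepdim=True)  # mean-pool into a task embedding

batch_a, batch_b = torch.randn(32, 6), torch.randn(32, 6)
relabeled = batch_a.clone()
relabeled[:, -1] = torch.randn(32)  # stand-in for another task's approximated rewards
loss = triplet_task_inference_loss(encode, batch_a, batch_b, relabeled)
loss.backward()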
