Multi-Task Reinforcement Learning Enables Parameter Scaling

7 March 2025
Reginald McLean
Evangelos Chatzaroulas
Jordan Terry
Isaac Woungang
Nariman Farsad
Pablo Samuel Castro
Abstract

Multi-task reinforcement learning (MTRL) aims to endow a single agent with the ability to perform well on multiple tasks. Recent works have focused on developing novel, sophisticated architectures to improve performance, often resulting in larger models; it is unclear, however, whether the performance gains are a consequence of the architecture design itself or of the extra parameters. We argue that the gains are mostly due to scale, demonstrating that naively scaling up a simple MTRL baseline to match parameter counts outperforms the more sophisticated architectures, and that these gains come primarily from scaling the critic rather than the actor. Additionally, we explore the training stability advantages that come with task diversity, demonstrating that increasing the number of tasks can help mitigate plasticity loss. Our findings suggest that MTRL's simultaneous training across multiple tasks provides a natural framework for beneficial parameter scaling in reinforcement learning, challenging the need for complex architectural innovations.
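
To make the scaling claim concrete, the sketch below shows what "naively scaling a simple MTRL baseline" can look like in practice: a vanilla actor-critic that conditions on a one-hot task ID, with the extra parameters placed in the critic rather than the actor. This is an illustrative assumption, not the authors' implementation; all widths, depths, and class/function names (mlp, MultiTaskActorCritic, actor_width, critic_width) are hypothetical.

# Minimal PyTorch sketch of a plain multi-task actor-critic where the
# parameter budget is spent on the critic. Illustrative only.
import torch
import torch.nn as nn

def mlp(in_dim, hidden_dim, out_dim, depth):
    # Simple fully connected network with ReLU activations.
    layers, d = [], in_dim
    for _ in range(depth):
        layers += [nn.Linear(d, hidden_dim), nn.ReLU()]
        d = hidden_dim
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)

class MultiTaskActorCritic(nn.Module):
    def __init__(self, obs_dim, act_dim, num_tasks,
                 actor_width=256, critic_width=1024, depth=3):
        super().__init__()
        # Task identity is provided as a one-hot vector appended to the observation.
        in_dim = obs_dim + num_tasks
        self.actor = mlp(in_dim, actor_width, act_dim, depth)
        # "Scale the critic": the Q-network gets the larger hidden width.
        self.critic = mlp(in_dim + act_dim, critic_width, 1, depth)

    def forward(self, obs, task_onehot, action):
        x = torch.cat([obs, task_onehot], dim=-1)
        return self.actor(x), self.critic(torch.cat([x, action], dim=-1))

Under this setup, matching a more sophisticated architecture's parameter count amounts to turning a single knob (critic_width), which is the kind of scaling the abstract identifies as most beneficial.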

@article{mclean2025_2503.05126,
  title={Multi-Task Reinforcement Learning Enables Parameter Scaling},
  author={Reginald McLean and Evangelos Chatzaroulas and Jordan Terry and Isaac Woungang and Nariman Farsad and Pablo Samuel Castro},
  journal={arXiv preprint arXiv:2503.05126},
  year={2025}
}