Containerized Distributed Value-Based Multi-Agent Reinforcement Learning

15 October 2021 (arXiv:2110.08169)
Siyang Wu
Tonghan Wang
Chenghao Li
Yang Hu
Chongjie Zhang
Abstract

Multi-agent reinforcement learning tasks demand a large volume of training samples. Unlike its single-agent counterpart, distributed value-based multi-agent reinforcement learning faces the unique challenges of heavy data transfer, inter-process communication management, and a high demand for exploration. We propose a containerized learning framework to address these problems. We pack several environment instances, a local learner and buffer, and a carefully designed multi-queue manager that avoids blocking into a single container. Local policies of each container are encouraged to be as diverse as possible, and only trajectories with the highest priority are sent to a global learner. In this way, we achieve a scalable, time-efficient, and diverse distributed MARL framework with high system throughput. To our knowledge, our method is the first to solve the challenging Google Research Football full game 5_v_5. On the StarCraft II micromanagement benchmark, our method achieves 4-18× better results than state-of-the-art non-distributed MARL algorithms.
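
The abstract only describes the containerized data flow at a high level. The toy Python sketch below illustrates one way that flow could look: several environment instances and a local buffer live inside a container, and only the top-priority trajectories are forwarded to a global learner. All names (Container, GlobalLearner, rollout, send_top_k) and the random stand-in priority are illustrative assumptions, not the paper's actual implementation or its multi-queue manager.

```python
# Minimal sketch of the containerized data flow described in the abstract.
# Names and the priority computation are assumptions, not the authors' code.
import heapq
import random
from dataclasses import dataclass, field
from typing import List


@dataclass(order=True)
class Trajectory:
    priority: float  # stand-in for a learning-based (e.g. TD-error) priority
    data: List[float] = field(compare=False, default_factory=list)


class Container:
    """Bundles several environment instances with a local buffer."""

    def __init__(self, n_envs: int = 4, send_top_k: int = 2):
        self.n_envs = n_envs
        self.send_top_k = send_top_k
        self.local_buffer: List[Trajectory] = []

    def rollout(self, env_id: int) -> Trajectory:
        # Placeholder rollout: a real container would step actual multi-agent
        # environments and score each trajectory with a learned priority.
        steps = [random.random() for _ in range(10)]
        return Trajectory(priority=sum(steps), data=steps)

    def collect_and_filter(self) -> List[Trajectory]:
        # Run one episode per environment instance, then keep only the top-k
        # trajectories by priority to limit data sent out of the container.
        for env_id in range(self.n_envs):
            self.local_buffer.append(self.rollout(env_id))
        top = heapq.nlargest(self.send_top_k, self.local_buffer)
        self.local_buffer.clear()
        return top


class GlobalLearner:
    """Aggregates high-priority trajectories from all containers."""

    def __init__(self):
        self.replay: List[Trajectory] = []

    def receive(self, trajectories: List[Trajectory]) -> None:
        self.replay.extend(trajectories)


if __name__ == "__main__":
    learner = GlobalLearner()
    for container in [Container() for _ in range(3)]:
        learner.receive(container.collect_and_filter())
    print(f"global replay holds {len(learner.replay)} trajectories")
```

Filtering by priority inside each container is what keeps the cross-process data transfer and inter-process communication load manageable as the number of containers grows, which is the scalability claim the abstract makes.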
