LAMARL: LLM-Aided Multi-Agent Reinforcement Learning for Cooperative Policy Generation

2 June 2025
Guobin Zhu, Rui Zhou, Wenkang Ji, Shiyu Zhao
Main: 7 pages · Bibliography: 1 page · 8 figures · 2 tables
Abstract

Although Multi-Agent Reinforcement Learning (MARL) is effective for complex multi-robot tasks, it suffers from low sample efficiency and requires iterative manual reward tuning. Large Language Models (LLMs) have shown promise in single-robot settings, but their application to multi-robot systems remains largely unexplored. This paper introduces LLM-Aided MARL (LAMARL), a novel approach that integrates MARL with LLMs, significantly improving sample efficiency without requiring manual design. LAMARL consists of two modules: the first leverages LLMs to fully automate the generation of prior policies and reward functions; the second is MARL, which uses the generated functions to guide robot policy training effectively. Both simulation and real-world experiments on a shape-assembly benchmark demonstrate the unique advantages of LAMARL. Ablation studies show that the prior policy improves sample efficiency by an average of 185.9% and enhances task completion, while structured prompts based on Chain-of-Thought (CoT) and basic APIs improve LLM output success rates by 28.5%-67.5%. Videos and code are available at this https URL
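
The two-module structure described in the abstract lends itself to a short sketch. Below is a minimal, hypothetical Python illustration of how an LLM-generated prior policy and reward function (Module 1) might plug into a multi-agent training loop (Module 2). Every name here (llm_generated_reward, llm_generated_prior_policy, train_step, beta) is an illustrative assumption, not the authors' actual code or API; the real implementation is in the code linked from the abstract.

# Minimal sketch (not the authors' code) of the two-module idea:
# Module 1's output is represented by hand-written stand-ins for an
# LLM-generated prior policy and reward function; Module 2 is a toy
# training step that blends the prior with a learned action.
import numpy as np

def llm_generated_reward(positions, targets):
    """Stand-in for a Module-1 reward: negative mean distance of each
    robot to its assigned target in a shape-assembly task."""
    return -float(np.mean(np.linalg.norm(positions - targets, axis=1)))

def llm_generated_prior_policy(positions, targets, gain=0.5):
    """Stand-in for a Module-1 prior policy: move toward the targets."""
    return gain * (targets - positions)

def train_step(positions, targets, learned_action, beta=0.5):
    """Module-2 step: blend the learned action with the prior policy,
    then score the transition with the generated reward."""
    prior = llm_generated_prior_policy(positions, targets)
    action = beta * prior + (1.0 - beta) * learned_action
    next_positions = positions + action
    return next_positions, llm_generated_reward(next_positions, targets)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    targets = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])
    positions = rng.uniform(-1.0, 1.0, size=targets.shape)
    for _ in range(20):
        # In a real MARL system this action would come from the policy network.
        learned_action = rng.normal(scale=0.05, size=targets.shape)
        positions, reward = train_step(positions, targets, learned_action)
    print(f"shaped reward after 20 steps: {reward:.3f}")

In this toy version the prior policy dominates early progress toward the target shape, which mirrors the abstract's claim that the generated prior improves sample efficiency relative to learning from scratch.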

@article{zhu2025_2506.01538,
  title={LAMARL: LLM-Aided Multi-Agent Reinforcement Learning for Cooperative Policy Generation},
  author={Guobin Zhu and Rui Zhou and Wenkang Ji and Shiyu Zhao},
  journal={arXiv preprint arXiv:2506.01538},
  year={2025}
}