Collaborating Action by Action: A Multi-agent LLM Framework for Embodied Reasoning

24 April 2025
Isadora White, Kolby Nottingham, Ayush Maniar, Max Robinson, Hansen Lillemark, Mehul Maheshwari, Lianhui Qin, Prithviraj Ammanabrolu
Communities: LLMAG, LM&Ro
Abstract

Collaboration is ubiquitous and essential in day-to-day life -- from exchanging ideas, to delegating tasks, to generating plans together. This work studies how LLMs can adaptively collaborate to perform complex embodied reasoning tasks. To this end, we introduce MINDcraft, an easily extensible platform built to enable LLM agents to control characters in the open-world game of Minecraft; and MineCollab, a benchmark to test the different dimensions of embodied and collaborative reasoning. An experimental study finds that the primary bottleneck in collaborating effectively for current state-of-the-art agents is efficient natural language communication, with agent performance dropping as much as 15% when they are required to communicate detailed task completion plans. We conclude that existing LLM agents are ill-optimized for multi-agent collaboration, especially in embodied scenarios, and highlight the need to employ methods beyond in-context and imitation learning. Our website can be found here: this https URL
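The abstract describes agents that collaborate "action by action": each LLM-driven agent shares a conversation history with its teammates and, on its turn, either sends a natural-language message or takes an embodied action. The Python sketch below is purely illustrative and is not the MINDcraft or MineCollab API; the names Agent, Turn, run_episode, and collectBlock are hypothetical stand-ins for that kind of shared-history, turn-based collaboration loop.

```python
# Hypothetical sketch of a turn-based multi-agent collaboration loop.
# Not the MINDcraft API: all names here are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Turn:
    agent: str
    kind: str      # "message" (natural-language chat) or "action" (embodied command)
    content: str

@dataclass
class Agent:
    name: str
    # Stand-in for an LLM call: maps (shared history, observation) to the next Turn.
    policy: Callable[[List[Turn], str], Turn]

def run_episode(agents: List[Agent],
                observe: Callable[[str], str],
                max_turns: int = 10) -> List[Turn]:
    """Round-robin collaboration: each agent sees the shared chat/action history
    plus its own observation of the environment, then either chats or acts."""
    history: List[Turn] = []
    for t in range(max_turns):
        agent = agents[t % len(agents)]
        history.append(agent.policy(history, observe(agent.name)))
    return history

if __name__ == "__main__":
    # Toy policies: one agent delegates in natural language, the other acts.
    planner = Agent("alpha", lambda h, obs: Turn("alpha", "message",
                                                 f"beta, please collect wood near {obs}"))
    worker = Agent("beta", lambda h, obs: Turn("beta", "action",
                                               "collectBlock('oak_log', 4)"))
    for turn in run_episode([planner, worker], observe=lambda name: "spawn", max_turns=4):
        print(f"[{turn.kind}] {turn.agent}: {turn.content}")
```

In this toy loop the bottleneck the abstract highlights shows up directly: the quality of the "message" turns determines whether the acting agent can carry out the intended plan.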

@article{white2025_2504.17950,
  title={Collaborating Action by Action: A Multi-agent LLM Framework for Embodied Reasoning},
  author={Isadora White and Kolby Nottingham and Ayush Maniar and Max Robinson and Hansen Lillemark and Mehul Maheshwari and Lianhui Qin and Prithviraj Ammanabrolu},
  journal={arXiv preprint arXiv:2504.17950},
  year={2025}
}