DART-LLM: Dependency-Aware Multi-Robot Task Decomposition and Execution using Large Language Models

13 November 2024
Yongdong Wang
Runze Xiao
Jun Younes Louhi Kasahara
Ryosuke Yajima
Keiji Nagatani
Atsushi Yamashita
Hajime Asama
Abstract

Large Language Models (LLMs) have demonstrated significant reasoning capabilities in robotic systems. However, their deployment in multi-robot systems remains fragmented and struggles to handle complex task dependencies and parallel execution. This study introduces DART-LLM (Dependency-Aware Multi-Robot Task Decomposition and Execution using Large Language Models), a system designed to address these challenges. DART-LLM uses LLMs to parse natural language instructions and decompose them into subtasks with explicit dependencies, establishing complex task sequences that enable efficient coordination and parallel execution in multi-robot systems. The system comprises a QA LLM module, Breakdown Function modules, an Actuation module, and a Vision-Language Model (VLM)-based object detection module, covering the full pipeline from natural language instructions to robotic actions. Experimental results demonstrate that DART-LLM excels at long-horizon tasks and collaborative tasks with complex dependencies. Even with smaller models such as Llama 3.1 8B, the system achieves good performance, highlighting DART-LLM's robustness to model size. Videos and code are available on the project website: https://wyd0817.github.io/project-dart-llm/
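
The core idea described above is that the LLM emits subtasks annotated with dependencies, so independent subtasks can be dispatched to different robots in parallel while dependent ones wait. Below is a minimal sketch of that pattern in Python; the Subtask structure, the execute_dag scheduler, and the example decomposition are illustrative assumptions, not the authors' actual implementation (see the project website for that).

```python
# Minimal sketch of dependency-aware subtask dispatch (illustrative only;
# the Subtask fields and scheduler below are assumptions, not DART-LLM's API).
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED
from dataclasses import dataclass, field

@dataclass
class Subtask:
    name: str                                       # unique subtask identifier
    robot: str                                      # robot assigned to this subtask
    action: str                                     # command sent to the robot
    deps: list[str] = field(default_factory=list)   # names of prerequisite subtasks

def run(task: Subtask) -> str:
    # Stand-in for actuation: in the real system this would drive the robot.
    print(f"[{task.robot}] {task.action} ({task.name})")
    return task.name

def execute_dag(tasks: list[Subtask]) -> None:
    """Dispatch each subtask as soon as all of its dependencies have finished,
    running independent subtasks in parallel."""
    by_name = {t.name: t for t in tasks}
    done: set[str] = set()
    pending = set(by_name)
    futures: set = set()
    with ThreadPoolExecutor() as pool:
        while pending or futures:
            ready = [n for n in pending if set(by_name[n].deps) <= done]
            if not ready and not futures:
                raise ValueError("dependency cycle or missing subtask")
            for n in ready:
                pending.discard(n)
                futures.add(pool.submit(run, by_name[n]))
            finished, futures = wait(futures, return_when=FIRST_COMPLETED)
            for f in finished:
                done.add(f.result())

# Hypothetical decomposition of "load the rubble onto the truck":
execute_dag([
    Subtask("nav_exc",   "excavator",  "navigate to rubble pile"),
    Subtask("nav_truck", "dump_truck", "navigate to loading point"),  # parallel with nav_exc
    Subtask("dig",       "excavator",  "excavate rubble", deps=["nav_exc"]),
    Subtask("load",      "excavator",  "load truck", deps=["dig", "nav_truck"]),
])
```

The point this illustrates is that dependency annotations turn a flat subtask list into a DAG: nav_exc and nav_truck share no dependencies and run concurrently on different robots, while load waits for both branches to finish. This is the coordination behavior the abstract attributes to the dependency-aware decomposition.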

@article{wang2025_2411.09022,
  title={DART-LLM: Dependency-Aware Multi-Robot Task Decomposition and Execution using Large Language Models},
  author={Yongdong Wang and Runze Xiao and Jun Younes Louhi Kasahara and Ryosuke Yajima and Keiji Nagatani and Atsushi Yamashita and Hajime Asama},
  journal={arXiv preprint arXiv:2411.09022},
  year={2025}
}