COMMA: A Communicative Multimodal Multi-Agent Benchmark

10 October 2024
Timothy Ossowski
Jixuan Chen
Danyal Maqbool
Zefan Cai
Tyler Bradshaw
Junjie Hu
Abstract

Despite rapid advances in multimodal agents built on large foundation models, their potential for language-based communication between agents in collaborative tasks has been largely overlooked. This oversight presents a critical gap in understanding their effectiveness in real-world deployments, particularly when communicating with humans. Existing agentic benchmarks fail to address key aspects of inter-agent communication and collaboration, especially in scenarios where agents have unequal access to information and must work together to achieve tasks beyond the scope of individual capabilities. To fill this gap, we introduce a novel benchmark designed to evaluate the collaborative performance of multimodal multi-agent systems through language communication. Our benchmark features a variety of scenarios, providing a comprehensive evaluation across four key categories of agentic capability in a communicative collaboration setting. By testing both agent-agent and agent-human collaborations using open-source and closed-source models, our findings reveal surprising weaknesses in state-of-the-art models, including proprietary models like GPT-4o. Some of these models struggle to outperform even a simple random agent baseline in agent-agent collaboration and only surpass the random baseline when a human is involved.
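The abstract does not specify an interface, but the setting it describes (two agents with unequal access to information that must coordinate through language, compared against a random-agent baseline) can be illustrated with a short sketch. The Python below is purely hypothetical: the agent classes, action set, and message format are invented for illustration and are not taken from the COMMA benchmark itself.

import random

# Hypothetical sketch of a communicative episode with asymmetric information:
# an "expert" sees the goal but can influence the world only through messages,
# while a "solver" acts on the world but cannot see the goal.
# Illustrative only; not the COMMA implementation.

ACTIONS = ["press_red", "press_green", "press_blue"]

class ExpertAgent:
    """Sees the target action; can help only by sending language messages."""
    def __init__(self, target: str):
        self.target = target

    def send_message(self) -> str:
        return f"The correct button is {self.target.split('_')[1]}."

class SolverAgent:
    """Cannot see the goal; grounds its action in the expert's message."""
    def act(self, message: str) -> str:
        for action in ACTIONS:
            if action.split("_")[1] in message.lower():
                return action
        return random.choice(ACTIONS)  # fall back if the message is unhelpful

class RandomAgent:
    """Baseline that ignores communication entirely."""
    def act(self, _message: str) -> str:
        return random.choice(ACTIONS)

def success_rate(solver, episodes: int = 1000) -> float:
    """Fraction of episodes in which the solver performs the target action."""
    wins = 0
    for _ in range(episodes):
        target = random.choice(ACTIONS)
        expert = ExpertAgent(target)
        if solver.act(expert.send_message()) == target:
            wins += 1
    return wins / episodes

if __name__ == "__main__":
    print(f"communicative solver: {success_rate(SolverAgent()):.2f}")
    print(f"random baseline:      {success_rate(RandomAgent()):.2f}")

Running the sketch shows the communicative solver near 1.00 and the random baseline near 0.33; the paper's finding is that several state-of-the-art models fail to open a comparable gap over the random baseline in agent-agent collaboration.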

@article{ossowski2025_2410.07553,
  title={ COMMA: A Communicative Multimodal Multi-Agent Benchmark },
  author={ Timothy Ossowski and Jixuan Chen and Danyal Maqbool and Zefan Cai and Tyler Bradshaw and Junjie Hu },
  journal={arXiv preprint arXiv:2410.07553},
  year={ 2024 }
}