MEXA: Towards General Multimodal Reasoning with Dynamic Multi-Expert Aggregation

20 June 2025
Shoubin Yu
Yue Zhang
Ziyang Wang
Jaehong Yoon
Mohit Bansal
Topics: MoE, LRM
Main: 8 pages · 7 figures · 5 tables · Bibliography: 4 pages · Appendix: 3 pages
Abstract

Combining pre-trained expert models offers substantial potential for scalable multimodal reasoning, but building a unified framework remains challenging due to the increasing diversity of input modalities and task complexity. For instance, medical diagnosis requires precise reasoning over structured clinical tables, while financial forecasting depends on interpreting plot-based data to make informed predictions. To tackle this challenge, we introduce MEXA, a training-free framework that performs modality- and task-aware aggregation of multiple expert models to enable effective multimodal reasoning across diverse domains. MEXA dynamically selects expert models based on the input modality and the task-specific reasoning demands (i.e., skills). Each expert model, specialized in a modality-task pair, generates interpretable textual reasoning outputs. MEXA then aggregates and reasons over these outputs using a Large Reasoning Model (LRM) to produce the final answer. This modular design allows flexible and transparent multimodal reasoning across diverse domains without additional training overhead. We extensively evaluate our approach on diverse multimodal benchmarks, including Video Reasoning, Audio Reasoning, 3D Understanding, and Medical QA. MEXA consistently delivers performance improvements over strong multimodal baselines, highlighting the effectiveness and broad applicability of our expert-driven selection and aggregation across diverse multimodal reasoning tasks.
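To make the two-stage design the abstract describes more concrete, here is a minimal Python sketch: a router filters a registry of pre-trained experts by input modality and required skills, each selected expert emits a textual rationale, and an LRM aggregates those rationales into a final answer. This is an illustrative assumption based only on the abstract; the `Expert` dataclass, `select_experts`, `mexa_answer`, and `call_lrm` names are hypothetical and do not correspond to the authors' released code.

```python
# Hypothetical sketch of the MEXA pipeline as described in the abstract.
# All names and signatures here are illustrative assumptions, not the
# authors' actual API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Expert:
    name: str
    modality: str                     # e.g. "video", "audio", "3d", "table"
    skill: str                        # e.g. "temporal reasoning", "medical QA"
    run: Callable[[dict, str], str]   # (sample, question) -> textual rationale

def select_experts(experts: list[Expert],
                   modality: str,
                   skills: set[str]) -> list[Expert]:
    """Stage 1: modality- and task-aware expert selection (training-free routing)."""
    return [e for e in experts if e.modality == modality and e.skill in skills]

def mexa_answer(experts: list[Expert],
                sample: dict,
                question: str,
                modality: str,
                skills: set[str],
                call_lrm: Callable[[str], str]) -> str:
    """Stage 2: each selected expert emits an interpretable textual rationale;
    a Large Reasoning Model (LRM) then reasons over them to produce the answer."""
    selected = select_experts(experts, modality, skills)
    rationales = [f"[{e.name}] {e.run(sample, question)}" for e in selected]
    prompt = (
        f"Question: {question}\n"
        "Expert outputs:\n" + "\n".join(rationales) + "\n"
        "Aggregate the expert outputs above and answer the question."
    )
    return call_lrm(prompt)
```

Using free-form text as the interface between experts and the aggregator is what keeps the design training-free and transparent: new experts can be registered without fine-tuning, and the LRM's aggregation step can be inspected directly.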

@article{yu2025_2506.17113,
  title={MEXA: Towards General Multimodal Reasoning with Dynamic Multi-Expert Aggregation},
  author={Shoubin Yu and Yue Zhang and Ziyang Wang and Jaehong Yoon and Mohit Bansal},
  journal={arXiv preprint arXiv:2506.17113},
  year={2025}
}