EE-MLLM: A Data-Efficient and Compute-Efficient Multimodal Large Language Model

21 August 2024
Feipeng Ma
Yizhou Zhou
Zheyu Zhang
Shilin Yan
Hebei Li
Zilong He
Siying Wu
Fengyun Rao
Yueyi Zhang
Xiaoyan Sun
Abstract

Recent advancements in Multimodal Large Language Models (MLLMs) have demonstrated satisfactory performance across various vision-language tasks. Current approaches for vision and language interaction fall into two categories: self-attention-based and cross-attention-based methods. However, both approaches present inherent limitations, forcing a trade-off between data and computational efficiency. To address this issue, we introduce the Data-Efficient and Compute-Efficient MLLM (EE-MLLM). Specifically, we modify the original self-attention mechanism in MLLM to a composite attention mechanism. This mechanism has two key characteristics: 1) eliminating the computational overhead of self-attention among visual tokens to achieve compute efficiency, and 2) reusing the weights from each layer of the LLM to facilitate effective vision-language modality alignment for data efficiency. As a result, EE-MLLM significantly outperforms Flamingo with limited training data, and reduces the prefilling time to 79 ms on an H800 GPU, compared to LLaVA's 277 ms. To further investigate the efficiency of EE-MLLM, we present a training-free variant named EE-MLLM-F, which reduces the computation cost of self-attention-based methods without additional training. Experimental results demonstrate the effectiveness of EE-MLLM across a range of benchmarks, including general-purpose datasets like MMBench and SeedBench, as well as fine-grained tasks such as TextVQA and DocVQA.
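
The composite attention idea from the abstract can be pictured with a small sketch. Below is a minimal, hypothetical PyTorch illustration (not the authors' implementation): text tokens act as the only queries, while visual tokens contribute only keys and values, so no self-attention is computed among visual tokens; the projection matrices stand in for the LLM-layer weights that the paper reuses. The function name, single-head setup, and shapes are assumptions made for illustration only.

import torch
import torch.nn.functional as F

def composite_attention(visual_h, text_h, w_q, w_k, w_v):
    """Sketch of a composite-attention step (hypothetical, single head).

    visual_h: (n_vis, d) visual token features
    text_h:   (n_txt, d) text token features
    w_q, w_k, w_v: (d, d) projection weights, standing in for reused LLM weights
    """
    # Queries come from text tokens only, so the quadratic cost over
    # visual tokens (visual-to-visual self-attention) is avoided.
    q = text_h @ w_q
    kv_in = torch.cat([visual_h, text_h], dim=0)
    k = kv_in @ w_k
    v = kv_in @ w_v

    d = q.shape[-1]
    scores = q @ k.T / d ** 0.5  # (n_txt, n_vis + n_txt)

    # Text tokens may attend to all visual tokens; the text-to-text block
    # keeps the usual causal mask.
    n_vis, n_txt = visual_h.shape[0], text_h.shape[0]
    causal = torch.tril(torch.ones(n_txt, n_txt, dtype=torch.bool))
    mask = torch.cat([torch.ones(n_txt, n_vis, dtype=torch.bool), causal], dim=1)
    scores = scores.masked_fill(~mask, float("-inf"))

    return F.softmax(scores, dim=-1) @ v  # (n_txt, d)

# Toy usage with random features and weights.
d = 64
out = composite_attention(
    torch.randn(16, d), torch.randn(8, d),
    torch.randn(d, d), torch.randn(d, d), torch.randn(d, d),
)
print(out.shape)  # torch.Size([8, 64])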

@article{ma2025_2408.11795,
  title={EE-MLLM: A Data-Efficient and Compute-Efficient Multimodal Large Language Model},
  author={Feipeng Ma and Yizhou Zhou and Zheyu Zhang and Shilin Yan and Hebei Li and Zilong He and Siying Wu and Fengyun Rao and Yueyi Zhang and Xiaoyan Sun},
  journal={arXiv preprint arXiv:2408.11795},
  year={2025}
}