DenseMLLM: Standard Multimodal LLMs are Intrinsic Dense Predictors

Yi Li
Hongze Shen
Lexiang Tang
Xin Li
Xinpeng Ding
Yinsong Liu
Deqiang Jiang
Xing Sun
Xiaomeng Li
Main: 8 pages · 9 figures · 9 tables · Bibliography: 3 pages · Appendix: 14 pages
Abstract

Multimodal Large Language Models (MLLMs) have demonstrated exceptional capabilities in high-level visual understanding. However, extending these models to fine-grained dense prediction tasks, such as semantic segmentation and depth estimation, typically requires incorporating complex, task-specific decoders and other customizations. This architectural fragmentation increases model complexity and deviates from the generalist design of MLLMs, ultimately limiting their practicality. In this work, we challenge this paradigm by adapting standard MLLMs to perform dense prediction without any additional task-specific decoders. We propose DenseMLLM, which retains the standard architecture and introduces a novel vision-token supervision strategy that accommodates multiple labels and tasks. Despite its minimalist design, our model achieves highly competitive performance across a wide range of dense prediction and vision-language benchmarks, demonstrating that a standard, general-purpose MLLM can effectively support dense perception without architectural specialization.
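To make the idea of supervising vision tokens for dense prediction concrete, the following is a minimal, illustrative sketch. It assumes a hypothetical setup in which an MLLM's vision encoder produces a grid of tokens (e.g. a 16×16 grid for a 224×224 image), per-pixel labels are downsampled to that grid, and a per-token classification loss is applied directly, with no task-specific decoder. All function names, shapes, and the majority-vote downsampling are assumptions for exposition, not the paper's actual implementation.

```python
import numpy as np

def token_level_targets(label_map, grid_h, grid_w):
    """Downsample an (H, W) integer label map to a (grid_h, grid_w)
    target grid by majority vote within each token's patch.
    (Illustrative assumption; the paper may use a different scheme.)"""
    H, W = label_map.shape
    ph, pw = H // grid_h, W // grid_w
    targets = np.zeros((grid_h, grid_w), dtype=np.int64)
    for i in range(grid_h):
        for j in range(grid_w):
            patch = label_map[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw]
            vals, counts = np.unique(patch, return_counts=True)
            targets[i, j] = vals[np.argmax(counts)]
    return targets

def per_token_cross_entropy(logits, targets):
    """logits: (grid_h, grid_w, C) class scores emitted for each vision
    token; targets: (grid_h, grid_w) integer labels. Returns mean NLL,
    i.e. the dense-supervision loss attached directly to the tokens."""
    z = logits - logits.max(axis=-1, keepdims=True)          # stability
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    gh, gw = targets.shape
    nll = -log_probs[np.arange(gh)[:, None],
                     np.arange(gw)[None, :],
                     targets]
    return nll.mean()
```

In this toy form, the "decoder" is nothing more than a per-token class score and a loss: the dense task is expressed entirely as supervision on the tokens the standard architecture already produces.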
