Slot-MLLM: Object-Centric Visual Tokenization for Multimodal LLM

Abstract

Recently, multimodal large language models (MLLMs) have emerged as a key approach toward achieving artificial general intelligence. In particular, vision-language MLLMs have been developed to generate not only text but also visual outputs from multimodal inputs. This advancement requires efficient image tokens that LLMs can process effectively in both input and output. However, existing image tokenization methods for MLLMs typically capture only global abstract concepts or uniformly segmented image patches, restricting MLLMs' capability to understand or generate detailed visual content, particularly at the object level. To address this limitation, we propose an object-centric visual tokenizer based on Slot Attention, designed specifically for MLLMs. Built on a Q-Former encoder, a diffusion decoder, and residual vector quantization, the proposed discretized slot tokens encode local visual details while preserving high-level semantics, and align with textual data so that they can be integrated seamlessly into the unified next-token prediction framework of LLMs. The resulting Slot-MLLM achieves significant performance improvements over baselines using previous visual tokenizers across various vision-language tasks that require detailed local comprehension and generation. Notably, this work is the first to demonstrate the feasibility of object-centric slot attention with MLLMs on in-the-wild natural images.
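
The sketch below illustrates the general idea of an object-centric visual tokenizer described in the abstract: a Slot Attention module (in the style of Locatello et al., 2020) pools image patch features into a fixed set of object slots, and a residual vector quantizer discretizes each slot into a short sequence of codebook indices that an LLM could consume as tokens. This is a minimal illustration under simplifying assumptions, not the authors' implementation: the Q-Former encoder, the diffusion decoder, and all training losses are omitted, and every module name and dimension here is an assumption for illustration only.

```python
# Minimal sketch (not the authors' code): Slot Attention pools ViT-style patch
# features into object slots, and a residual vector quantizer turns each slot
# into discrete codes. Q-Former encoding, the diffusion decoder, and all
# training losses are omitted; every size here is an illustrative assumption.
import torch
import torch.nn as nn


class SlotAttention(nn.Module):
    """Iteratively binds a fixed number of slots to input features
    (in the style of Locatello et al., 2020)."""

    def __init__(self, num_slots=8, dim=256, iters=3):
        super().__init__()
        self.num_slots, self.iters, self.dim = num_slots, iters, dim
        self.slots_mu = nn.Parameter(torch.randn(1, 1, dim))
        self.slots_logsigma = nn.Parameter(torch.zeros(1, 1, dim))
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        self.gru = nn.GRUCell(dim, dim)
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.norm_in, self.norm_slots, self.norm_mlp = (
            nn.LayerNorm(dim), nn.LayerNorm(dim), nn.LayerNorm(dim))

    def forward(self, feats):                       # feats: (B, num_patches, dim)
        B = feats.size(0)
        feats = self.norm_in(feats)
        k, v = self.to_k(feats), self.to_v(feats)
        slots = self.slots_mu + self.slots_logsigma.exp() * torch.randn(
            B, self.num_slots, self.dim, device=feats.device)
        for _ in range(self.iters):
            slots_prev = slots
            q = self.to_q(self.norm_slots(slots))
            attn = torch.einsum("bnd,bsd->bns", k, q) * self.dim ** -0.5
            attn = attn.softmax(dim=-1) + 1e-8      # slots compete for patches
            attn = attn / attn.sum(dim=1, keepdim=True)
            updates = torch.einsum("bns,bnd->bsd", attn, v)
            slots = self.gru(updates.reshape(-1, self.dim),
                             slots_prev.reshape(-1, self.dim)).view(B, -1, self.dim)
            slots = slots + self.mlp(self.norm_mlp(slots))
        return slots                                # (B, num_slots, dim)


class ResidualVQ(nn.Module):
    """Discretizes each slot into `depth` codebook indices by quantizing
    successive residuals, with a straight-through gradient estimator."""

    def __init__(self, dim=256, codebook_size=1024, depth=4):
        super().__init__()
        self.codebooks = nn.ModuleList(
            nn.Embedding(codebook_size, dim) for _ in range(depth))

    def forward(self, slots):                       # slots: (B, S, dim)
        B, S, D = slots.shape
        residual, quantized, indices = slots, torch.zeros_like(slots), []
        for cb in self.codebooks:
            dists = torch.cdist(residual.reshape(-1, D), cb.weight)  # (B*S, K)
            idx = dists.argmin(dim=-1).view(B, S)
            code = cb(idx)
            quantized = quantized + code
            residual = residual - code
            indices.append(idx)
        quantized = slots + (quantized - slots).detach()  # straight-through
        return quantized, torch.stack(indices, dim=-1)    # indices: (B, S, depth)


if __name__ == "__main__":
    feats = torch.randn(2, 196, 256)                # e.g. 14x14 ViT patch features
    slot_tokens, code_ids = ResidualVQ()(SlotAttention()(feats))
    print(slot_tokens.shape, code_ids.shape)        # (2, 8, 256), (2, 8, 4)
```

In this reading, the quantized slots would be decoded back to pixels (the paper mentions a diffusion decoder for this role), while the discrete indices serve as visual tokens in the LLM's next-token prediction stream.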

@article{chi2025_2505.17726,
  title={Slot-MLLM: Object-Centric Visual Tokenization for Multimodal LLM},
  author={Donghwan Chi and Hyomin Kim and Yoonjin Oh and Yongjin Kim and Donghoon Lee and Daejin Jo and Jongmin Kim and Junyeob Baek and Sungjin Ahn and Sungwoong Kim},
  journal={arXiv preprint arXiv:2505.17726},
  year={2025}
}