
Machine Mental Imagery: Empower Multimodal Reasoning with Latent Visual Tokens

Main: 9 pages
Bibliography: 3 pages
Appendix: 8 pages
11 figures
10 tables
Abstract

Vision-language models (VLMs) excel at multimodal understanding, yet their text-only decoding forces them to verbalize visual reasoning, limiting performance on tasks that demand visual imagination. Recent attempts train VLMs to render explicit images, but the heavy image-generation pre-training often hinders reasoning ability. Inspired by the way humans reason with mental imagery, the internal construction and manipulation of visual cues, we investigate whether VLMs can reason through interleaved multimodal trajectories without producing explicit images. To this end, we present a Machine Mental Imagery framework, dubbed Mirage, which augments VLM decoding with latent visual tokens alongside ordinary text. Concretely, whenever the model chooses to "think visually", it recasts its hidden states as the next tokens, thereby continuing a multimodal trajectory without generating pixel-level images. We begin by supervising the latent tokens through distillation from ground-truth image embeddings, then switch to text-only supervision so that the latent trajectory aligns tightly with the task objective. A subsequent reinforcement learning stage further enhances multimodal reasoning capability. Experiments on diverse benchmarks demonstrate that Mirage unlocks stronger multimodal reasoning without explicit image generation.
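The core decoding mechanism described above, feeding the model's own hidden state back as the next input whenever it elects to "think visually", can be sketched in a few lines. The snippet below is a minimal illustration under assumed interfaces, not the authors' implementation: `model`, `embed`, `lm_head`, and the `<latent>` control-token id are hypothetical placeholders standing in for a decoder-style VLM.

```python
import torch

LATENT_TOKEN_ID = 32000  # hypothetical id of a special "<latent>" control token


@torch.no_grad()
def decode_with_latent_tokens(model, embed, lm_head, input_ids, max_steps=64):
    """Greedy decoding that interleaves text tokens with latent visual tokens."""
    inputs_embeds = embed(input_ids)  # (1, T, d) embeddings of the text prompt
    generated = []
    for _ in range(max_steps):
        hidden = model(inputs_embeds=inputs_embeds).last_hidden_state  # (1, T, d)
        last_hidden = hidden[:, -1:, :]                                # (1, 1, d)
        next_id = lm_head(last_hidden).argmax(dim=-1)                  # (1, 1)

        if next_id.item() == LATENT_TOKEN_ID:
            # "Think visually": recast the hidden state itself as the next token,
            # continuing the multimodal trajectory without rendering pixels.
            next_embed = last_hidden
        else:
            # Ordinary text step: embed the predicted token as usual.
            next_embed = embed(next_id)
            generated.append(next_id.item())

        inputs_embeds = torch.cat([inputs_embeds, next_embed], dim=1)
    return generated
```

In this sketch the latent steps never leave embedding space, which is what lets the model continue a multimodal trajectory without an image decoder; the per-step full forward pass is kept only for clarity and would normally be replaced by cached decoding.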

@article{yang2025_2506.17218,
  title={Machine Mental Imagery: Empower Multimodal Reasoning with Latent Visual Tokens},
  author={Zeyuan Yang and Xueyang Yu and Delin Chen and Maohao Shen and Chuang Gan},
  journal={arXiv preprint arXiv:2506.17218},
  year={2025}
}