ShapeLLM-Omni: A Native Multimodal LLM for 3D Generation and Understanding

2 June 2025
Junliang Ye
Zhengyi Wang
Ruowen Zhao
Shenghao Xie
Jun Zhu
arXiv:2506.01853 (abs / PDF / HTML)
Main: 11 pages, 15 figures, 10 tables; Bibliography: 7 pages; Appendix: 7 pages
Abstract

Recently, the powerful text-to-image capabilities of ChatGPT-4o have led to growing appreciation for native multimodal large language models. However, its multimodal capabilities remain confined to images and text. Yet beyond images, the ability to understand and generate 3D content is equally crucial. To address this gap, we propose ShapeLLM-Omni, a native 3D large language model capable of understanding and generating 3D assets and text in any sequence. First, we train a 3D vector-quantized variational autoencoder (VQVAE), which maps 3D objects into a discrete latent space to achieve efficient and accurate shape representation and reconstruction. Building upon the 3D-aware discrete tokens, we construct a large-scale continuous training dataset named 3D-Alpaca, encompassing generation, comprehension, and editing, thus providing rich resources for future research and training. Finally, we perform instruction-based training of the Qwen-2.5-vl-7B-Instruct model on the 3D-Alpaca dataset. Our work provides an effective attempt at extending multimodal models with basic 3D capabilities, which contributes to future research in 3D-native AI. Project page: this https URL
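The key mechanism the abstract describes is mapping 3D shapes into discrete tokens with a VQVAE so that shape tokens and text tokens can be modeled in one sequence. Below is a minimal sketch of that vector-quantization step; the class name, codebook size, latent dimension, and dummy encoder output are all illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class Simple3DQuantizer(nn.Module):
    """Hypothetical quantizer: maps continuous 3D latents to discrete token ids."""
    def __init__(self, num_codes=8192, dim=256):
        super().__init__()
        # Learnable codebook: each row is the embedding of one discrete "3D token".
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, z):
        # z: (B, N, dim) continuous latents from some 3D encoder (e.g. over voxels).
        # Squared L2 distance from every latent to every codebook entry.
        dist = (z.pow(2).sum(-1, keepdim=True)
                - 2 * z @ self.codebook.weight.t()
                + self.codebook.weight.pow(2).sum(-1))
        ids = dist.argmin(dim=-1)        # (B, N) discrete 3D token ids
        z_q = self.codebook(ids)         # (B, N, dim) quantized latents
        # Straight-through estimator: pass gradients back to the encoder output.
        z_q = z + (z_q - z).detach()
        return z_q, ids

quantizer = Simple3DQuantizer()
latents = torch.randn(2, 512, 256)       # placeholder for encoder output
z_q, ids = quantizer(latents)
print(ids.shape)                          # torch.Size([2, 512])

In a full training loop one would also add the usual codebook and commitment losses; the resulting integer ids are the kind of discrete shape tokens that can be interleaved with text tokens during instruction tuning, as the abstract describes.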

@article{ye2025_2506.01853,
  title={ShapeLLM-Omni: A Native Multimodal LLM for 3D Generation and Understanding},
  author={Junliang Ye and Zhengyi Wang and Ruowen Zhao and Shenghao Xie and Jun Zhu},
  journal={arXiv preprint arXiv:2506.01853},
  year={2025}
}