TAR3D: Creating High-Quality 3D Assets via Next-Part Prediction

13 March 2025
Xuying Zhang
Yutong Liu
Yangguang Li
Renrui Zhang
Yufei Liu
Kai Wang
Wanli Ouyang
Zhiwei Xiong
Peng Gao
Qibin Hou
Ming-Ming Cheng
Abstract

We present TAR3D, a novel framework that consists of a 3D-aware Vector Quantized-Variational AutoEncoder (VQ-VAE) and a Generative Pre-trained Transformer (GPT) to generate high-quality 3D assets. The core insight of this work is to migrate the multimodal unification and promising learning capabilities of the next-token prediction paradigm to conditional 3D object generation. To achieve this, the 3D VQ-VAE first encodes a wide range of 3D shapes into a compact triplane latent space and utilizes a set of discrete representations from a trainable codebook to reconstruct fine-grained geometries under the supervision of query point occupancy. Then, the 3D GPT, equipped with a custom triplane position embedding called TriPE, predicts the codebook index sequence with prefilling prompt tokens in an autoregressive manner so that the composition of 3D geometries can be modeled part by part. Extensive experiments on ShapeNet and Objaverse demonstrate that TAR3D can achieve superior generation quality over existing methods in text-to-3D and image-to-3D tasks.
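
The abstract describes a two-stage pipeline: a VQ-VAE that discretizes triplane features against a learnable codebook, followed by a GPT that autoregressively predicts codebook indices after prefilled condition tokens. The sketch below illustrates that flow in PyTorch under stated assumptions; every class name, tensor shape, and hyperparameter here is illustrative rather than the authors' implementation, and a plain learned positional embedding merely stands in for the paper's TriPE.

# Minimal sketch of the two-stage pipeline described above, assuming PyTorch.
# All names, shapes, and hyperparameters are illustrative assumptions, not the
# authors' code; a learned positional embedding stands in for TriPE.
import torch
import torch.nn as nn


class VectorQuantizer(nn.Module):
    # Stage 1 (sketch): map continuous triplane features to discrete codebook indices.
    def __init__(self, codebook_size=8192, dim=256):
        super().__init__()
        self.codebook = nn.Embedding(codebook_size, dim)

    def forward(self, z):                                  # z: (B, N, dim) triplane tokens
        w = self.codebook.weight                           # (K, dim)
        # squared Euclidean distance from each token to every codebook entry
        d = (z.pow(2).sum(-1, keepdim=True)
             - 2 * z @ w.t()
             + w.pow(2).sum(-1))                           # (B, N, K)
        idx = d.argmin(dim=-1)                             # nearest code per token
        z_q = self.codebook(idx)
        z_q = z + (z_q - z).detach()                       # straight-through estimator
        return z_q, idx


class NextPartGPT(nn.Module):
    # Stage 2 (sketch): predict codebook indices autoregressively after prefilled
    # condition (text/image) tokens, so geometry is composed part by part.
    def __init__(self, codebook_size=8192, dim=256, n_layers=4, n_heads=8, max_len=1024):
        super().__init__()
        self.tok_emb = nn.Embedding(codebook_size, dim)
        self.pos_emb = nn.Embedding(max_len, dim)          # stand-in for TriPE
        layer = nn.TransformerEncoderLayer(dim, n_heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(dim, codebook_size)

    def forward(self, prompt, idx):                        # prompt: (B, P, dim), idx: (B, T)
        x = self.tok_emb(idx)
        x = x + self.pos_emb(torch.arange(x.size(1), device=x.device))
        x = torch.cat([prompt, x], dim=1)                  # prefill condition tokens
        mask = nn.Transformer.generate_square_subsequent_mask(x.size(1)).to(x.device)
        h = self.blocks(x, mask=mask)                      # causal (next-part) attention
        # logits at position i predict the (i+1)-th shape token
        return self.head(h[:, prompt.size(1) - 1:-1])      # (B, T, K)


# Tiny smoke test with random tensors (shapes are illustrative only).
vq, gpt = VectorQuantizer(), NextPartGPT()
z = torch.randn(2, 48, 256)                                # fake triplane encoder output
_, idx = vq(z)
prompt = torch.randn(2, 8, 256)                            # fake condition tokens
logits = gpt(prompt, idx)                                  # (2, 48, 8192)
loss = nn.functional.cross_entropy(logits.reshape(-1, 8192), idx.reshape(-1))

In training, the cross-entropy term above would drive the GPT to predict each shape token from the condition tokens and the tokens already generated; at inference the same model would be sampled index by index and the VQ-VAE decoder would recover geometry from the resulting codes.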

View on arXiv
@article{zhang2025_2412.16919,
  title={TAR3D: Creating High-Quality 3D Assets via Next-Part Prediction},
  author={Xuying Zhang and Yutong Liu and Yangguang Li and Renrui Zhang and Yufei Liu and Kai Wang and Wanli Ouyang and Zhiwei Xiong and Peng Gao and Qibin Hou and Ming-Ming Cheng},
  journal={arXiv preprint arXiv:2412.16919},
  year={2025}
}