HOIGPT: Learning Long Sequence Hand-Object Interaction with Language Models

Abstract

We introduce HOIGPT, a token-based generative method that unifies 3D hand-object interaction (HOI) perception and generation, offering the first comprehensive solution for captioning and generating high-quality 3D HOI sequences from a diverse range of conditional signals (e.g., text, objects, partial sequences). At its core, HOIGPT utilizes a large language model to predict the bidirectional transformation between HOI sequences and natural language descriptions. Given text inputs, HOIGPT generates a sequence of hand and object meshes; given (partial) HOI sequences, HOIGPT generates text descriptions and completes the sequences. To facilitate HOI understanding with a large language model, this paper introduces two key innovations: (1) a novel physically grounded HOI tokenizer, the hand-object decomposed VQ-VAE, for discretizing HOI sequences, and (2) a motion-aware language model trained to process and generate both text and HOI tokens. Extensive experiments demonstrate that HOIGPT sets new state-of-the-art performance on both text generation (+2.01% R Precision) and HOI generation (-2.56 FID) across multiple tasks and benchmarks.
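To make the tokenization idea concrete, below is a minimal sketch (not the authors' code) of a hand-object decomposed VQ-VAE: hand and object motion streams are encoded separately and each is quantized against its own codebook, yielding discrete HOI tokens that a language model could interleave with text tokens. All module choices, feature dimensions, and the encoder architecture here are assumptions for illustration only.

```python
# Hypothetical sketch of a hand-object decomposed VQ-VAE tokenizer.
# Dimensions and encoder choices are assumptions, not the paper's implementation.
import torch
import torch.nn as nn


class VectorQuantizer(nn.Module):
    """Nearest-neighbour vector quantization with a straight-through estimator."""
    def __init__(self, num_codes: int, dim: int):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, z):                              # z: (B, T, dim)
        dist = torch.cdist(z, self.codebook.weight)    # (B, T, num_codes)
        idx = dist.argmin(dim=-1)                      # discrete token indices
        z_q = self.codebook(idx)
        z_q = z + (z_q - z).detach()                   # straight-through gradient
        return z_q, idx


class DecomposedHOITokenizer(nn.Module):
    """Encodes hand and object streams separately, then quantizes each one."""
    def __init__(self, hand_dim=99, obj_dim=12, latent=256, num_codes=512):
        super().__init__()
        self.hand_enc = nn.GRU(hand_dim, latent, batch_first=True)
        self.obj_enc = nn.GRU(obj_dim, latent, batch_first=True)
        self.hand_vq = VectorQuantizer(num_codes, latent)
        self.obj_vq = VectorQuantizer(num_codes, latent)

    def forward(self, hand_seq, obj_seq):              # (B, T, hand_dim), (B, T, obj_dim)
        h, _ = self.hand_enc(hand_seq)
        o, _ = self.obj_enc(obj_seq)
        _, hand_tokens = self.hand_vq(h)
        _, obj_tokens = self.obj_vq(o)
        return hand_tokens, obj_tokens                 # discrete HOI tokens for the LM


# Usage: tokens from both streams can then be interleaved with text tokens.
tokenizer = DecomposedHOITokenizer()
hand_tokens, obj_tokens = tokenizer(torch.randn(1, 64, 99), torch.randn(1, 64, 12))
```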

@article{huang2025_2503.19157,
  title={HOIGPT: Learning Long Sequence Hand-Object Interaction with Language Models},
  author={Mingzhen Huang and Fu-Jen Chu and Bugra Tekin and Kevin J Liang and Haoyu Ma and Weiyao Wang and Xingyu Chen and Pierre Gleize and Hongfei Xue and Siwei Lyu and Kris Kitani and Matt Feiszli and Hao Tang},
  journal={arXiv preprint arXiv:2503.19157},
  year={2025}
}