DIPO: Dual-State Images Controlled Articulated Object Generation Powered by Diverse Data

26 May 2025
Ruiqi Wu
Xinjie Wang
Liu Liu
Chunle Guo
Jiaxiong Qiu
Chongyi Li
Lichao Huang
Zhizhong Su
Ming-Ming Cheng
Abstract

We present DIPO, a novel framework for the controllable generation of articulated 3D objects from a pair of images: one depicting the object in a resting state and the other in an articulated state. Compared to single-image approaches, our dual-image input adds only a modest data-collection overhead while providing important motion information that reliably guides the prediction of kinematic relationships between parts. Specifically, we propose a dual-image diffusion model that captures relationships between the image pair to generate part layouts and joint parameters. In addition, we introduce a Chain-of-Thought (CoT) based graph reasoner that explicitly infers part connectivity relationships. To further improve robustness and generalization on complex articulated objects, we develop a fully automated dataset expansion pipeline, named LEGO-Art, that enriches the diversity and complexity of the PartNet-Mobility dataset. We propose PM-X, a large-scale dataset of complex articulated 3D objects, accompanied by rendered images, URDF annotations, and textual descriptions. Extensive experiments demonstrate that DIPO significantly outperforms existing baselines in both the resting state and the articulated state, while the proposed PM-X dataset further enhances generalization to diverse and structurally complex articulated objects. Our code and dataset will be released to the community upon publication.
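
The abstract describes the model's outputs as part layouts plus joint parameters, with PM-X shipping URDF annotations for each object. As a rough, hypothetical illustration of what such an annotation contains (this is not the authors' released code; the Part and Joint fields below are assumptions chosen to match what URDF requires), the following Python sketch builds a toy cabinet with one hinged door and serializes it into a minimal URDF document using only the standard library.

# Minimal, illustrative sketch: serialize assumed part layouts and joint
# parameters into a URDF string. Field names are hypothetical.
from dataclasses import dataclass
from typing import List, Tuple
import xml.etree.ElementTree as ET


@dataclass
class Part:
    name: str
    size: Tuple[float, float, float]    # box extents (x, y, z) in meters
    origin: Tuple[float, float, float]  # box center in the object frame


@dataclass
class Joint:
    name: str
    joint_type: str                     # "revolute", "prismatic", or "fixed"
    parent: str                         # parent part name
    child: str                          # child part name
    axis: Tuple[float, float, float]    # joint axis direction
    origin: Tuple[float, float, float]  # joint origin in the parent frame
    limit: Tuple[float, float]          # (lower, upper) in radians or meters


def to_urdf(object_name: str, parts: List[Part], joints: List[Joint]) -> str:
    """Write parts as box links and joints as URDF joint elements."""
    robot = ET.Element("robot", name=object_name)
    for p in parts:
        link = ET.SubElement(robot, "link", name=p.name)
        visual = ET.SubElement(link, "visual")
        ET.SubElement(visual, "origin", xyz=" ".join(map(str, p.origin)))
        geom = ET.SubElement(visual, "geometry")
        ET.SubElement(geom, "box", size=" ".join(map(str, p.size)))
    for j in joints:
        joint = ET.SubElement(robot, "joint", name=j.name, type=j.joint_type)
        ET.SubElement(joint, "parent", link=j.parent)
        ET.SubElement(joint, "child", link=j.child)
        ET.SubElement(joint, "origin", xyz=" ".join(map(str, j.origin)))
        ET.SubElement(joint, "axis", xyz=" ".join(map(str, j.axis)))
        ET.SubElement(joint, "limit", lower=str(j.limit[0]), upper=str(j.limit[1]))
    ET.indent(robot)  # pretty-print; requires Python 3.9+
    return ET.tostring(robot, encoding="unicode")


if __name__ == "__main__":
    # Toy example: a cabinet body with one door on a revolute hinge.
    parts = [
        Part("body", size=(0.6, 0.4, 0.8), origin=(0.0, 0.0, 0.4)),
        Part("door", size=(0.58, 0.02, 0.78), origin=(0.0, 0.21, 0.4)),
    ]
    joints = [
        Joint("door_hinge", "revolute", parent="body", child="door",
              axis=(0.0, 0.0, 1.0), origin=(-0.3, 0.2, 0.0), limit=(0.0, 1.57)),
    ]
    print(to_urdf("cabinet", parts, joints))

The joint list here is exactly the kind of kinematic relationship (parent, child, axis, limits) that the dual-image input is meant to disambiguate: the articulated-state image reveals which part moves, about which axis, and how far.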

@article{wu2025_2505.20460,
  title={DIPO: Dual-State Images Controlled Articulated Object Generation Powered by Diverse Data},
  author={Ruiqi Wu and Xinjie Wang and Liu Liu and Chunle Guo and Jiaxiong Qiu and Chongyi Li and Lichao Huang and Zhizhong Su and Ming-Ming Cheng},
  journal={arXiv preprint arXiv:2505.20460},
  year={2025}
}