PhysTwin: Physics-Informed Reconstruction and Simulation of Deformable Objects from Videos

23 March 2025
Hanxiao Jiang
Hao-Yu Hsu
Kaifeng Zhang
Hsin-Ni Yu
Shenlong Wang
Yunzhu Li
Abstract

Creating a physical digital twin of a real-world object has immense potential in robotics, content creation, and XR. In this paper, we present PhysTwin, a novel framework that uses sparse videos of dynamic objects under interaction to produce a photo- and physically realistic, real-time interactive virtual replica. Our approach centers on two key components: (1) a physics-informed representation that combines spring-mass models for realistic physical simulation, generative shape models for geometry, and Gaussian splats for rendering; and (2) a novel multi-stage, optimization-based inverse modeling framework that reconstructs complete geometry, infers dense physical properties, and replicates realistic appearance from videos. Our method integrates an inverse physics framework with visual perception cues, enabling high-fidelity reconstruction even from partial, occluded, and limited viewpoints. PhysTwin supports modeling various deformable objects, including ropes, stuffed animals, cloth, and delivery packages. Experiments show that PhysTwin outperforms competing methods in reconstruction, rendering, future prediction, and simulation under novel interactions. We further demonstrate its applications in interactive real-time simulation and model-based robotic motion planning.
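To make the physics-informed representation concrete, the sketch below illustrates the spring-mass component that the abstract describes: particles connected by springs whose per-spring stiffnesses are exactly the kind of dense physical property an inverse-modeling stage would optimize against video observations. This is a minimal illustrative example, not the authors' implementation; the function name, parameters, and the semi-implicit Euler integrator are assumptions made for clarity.

```python
import numpy as np

def spring_mass_step(x, v, springs, rest_len, k, damping, masses, dt,
                     gravity=np.array([0.0, -9.8, 0.0])):
    """One semi-implicit Euler step of a generic spring-mass system.

    x        : (N, 3) particle positions
    v        : (N, 3) particle velocities
    springs  : (S, 2) integer index pairs connecting particles
    rest_len : (S,)   spring rest lengths
    k        : (S,)   per-spring stiffness (a quantity an inverse
                      physics stage could fit from observed motion)
    damping  : scalar viscous damping coefficient
    masses   : (N,)   particle masses
    dt       : time step
    """
    # Start with gravity acting on every particle.
    forces = np.tile(gravity, (x.shape[0], 1)) * masses[:, None]

    # Hooke's law force along each spring.
    i, j = springs[:, 0], springs[:, 1]
    d = x[j] - x[i]
    length = np.linalg.norm(d, axis=1, keepdims=True)
    direction = d / np.maximum(length, 1e-8)
    f = (k[:, None] * (length - rest_len[:, None])) * direction

    # Accumulate equal and opposite forces on the two endpoints.
    np.add.at(forces, i, f)
    np.add.at(forces, j, -f)
    forces -= damping * v

    # Semi-implicit Euler update.
    v_new = v + dt * forces / masses[:, None]
    x_new = x + dt * v_new
    return x_new, v_new
```

In an inverse-modeling setting, a step like this would be wrapped in an optimization loop that adjusts the stiffness and damping parameters until the simulated particle trajectories match the geometry and motion recovered from the input videos.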

@article{jiang2025_2503.17973,
  title={PhysTwin: Physics-Informed Reconstruction and Simulation of Deformable Objects from Videos},
  author={Hanxiao Jiang and Hao-Yu Hsu and Kaifeng Zhang and Hsin-Ni Yu and Shenlong Wang and Yunzhu Li},
  journal={arXiv preprint arXiv:2503.17973},
  year={2025}
}