Language is often used to describe physical interaction, yet most 3D human pose estimation methods overlook this rich source of information. We bridge this gap by leveraging large multimodal models (LMMs) as priors for reconstructing contact poses, offering a scalable alternative to methods that rely on costly manual human annotations or motion capture data. Our approach extracts contact-relevant descriptors from an LMM and translates them into tractable losses that constrain 3D human pose optimization. Despite its simplicity, our method produces compelling reconstructions for both two-person interaction and self-contact scenarios, accurately capturing the semantics of physical and social interaction. Our results demonstrate that LMMs can serve as powerful tools for contact prediction and pose estimation. Our code is publicly available at this https URL.
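To make the pipeline concrete, the sketch below shows one way LMM-derived contact descriptors could be converted into a differentiable loss for pose optimization. This is an illustrative assumption, not the paper's implementation: the contact pairs, the joint-index mapping, and the squared-distance loss form are hypothetical placeholders, and the reprojection and prior terms used in practice are omitted.

# Minimal sketch (PyTorch): turn hypothetical LMM-predicted contact pairs into a
# loss on 3D joint positions and minimize it. All names are illustrative.
import torch

# Hypothetical LMM output for an image, e.g. "person A's right hand touches
# person B's left shoulder".
contact_pairs = [("right_hand", "left_shoulder")]

# Hypothetical mapping from body-part names to joint indices in a pose.
JOINT_INDEX = {"right_hand": 23, "left_shoulder": 16}

def contact_loss(joints_a: torch.Tensor, joints_b: torch.Tensor) -> torch.Tensor:
    """Penalize 3D distance between body parts the LMM says are in contact.

    joints_a, joints_b: (J, 3) tensors of 3D joint positions for the two people.
    """
    loss = joints_a.new_zeros(())
    for part_a, part_b in contact_pairs:
        pa = joints_a[JOINT_INDEX[part_a]]
        pb = joints_b[JOINT_INDEX[part_b]]
        loss = loss + torch.sum((pa - pb) ** 2)
    return loss

# Toy optimization loop over free joint positions (a real pipeline would optimize
# body-model parameters and add reprojection / prior terms).
joints_a = torch.randn(24, 3, requires_grad=True)
joints_b = torch.randn(24, 3, requires_grad=True)
optimizer = torch.optim.Adam([joints_a, joints_b], lr=0.01)
for _ in range(100):
    optimizer.zero_grad()
    loss = contact_loss(joints_a, joints_b)
    loss.backward()
    optimizer.step()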
@article{subramanian2025_2405.03689,
  title   = {Pose Priors from Language Models},
  author  = {Sanjay Subramanian and Evonne Ng and Lea Müller and Dan Klein and Shiry Ginosar and Trevor Darrell},
  journal = {arXiv preprint arXiv:2405.03689},
  year    = {2025}
}