Handle-based Mesh Deformation Guided By Vision Language Model

5 June 2025
Xingpeng Sun
Shiyang Jia
Zherong Pan
Kui Wu
Aniket Bera
arXiv:2506.04562 (abs | PDF | HTML)
Abstract

Mesh deformation is a fundamental tool in 3D content manipulation. Despite extensive prior research, existing approaches often suffer from low output quality, require significant manual tuning, or depend on data-intensive training. To address these limitations, we introduce a training-free, handle-based mesh deformation method. Our core idea is to leverage a Vision-Language Model (VLM) to interpret and manipulate a handle-based interface through prompt engineering. We begin by applying cone singularity detection to identify a sparse set of potential handles. The VLM is then prompted to select both the deformable sub-parts of the mesh and the handles that best align with user instructions. Subsequently, we query the desired deformed positions of the selected handles in screen space. To reduce the uncertainty inherent in VLM predictions, we aggregate the results from multiple camera views using a novel multi-view voting scheme. Across a suite of benchmarks, our method produces deformations that align more closely with user intent, as measured by CLIP and GPTEval3D scores, while introducing low distortion, quantified via membrane energy. In summary, our approach is training-free, highly automated, and consistently delivers high-quality mesh deformations.
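
The abstract outlines a pipeline of handle detection, VLM-guided handle selection, screen-space target prediction, and multi-view aggregation. The sketch below illustrates only the last step: combining per-view screen-space predictions for a single handle into one 3D target. It is a hypothetical illustration, not the paper's implementation; the pinhole camera model, the depth-preserving lifting, the unproject and vote_handle_target helpers, and the coordinate-wise median used as the "vote" are all assumptions introduced here.

# Minimal sketch (assumptions: pinhole cameras given by intrinsics K and
# extrinsics (R, t); each 2D prediction is lifted to 3D at the handle's
# original camera-space depth; the per-view results are fused with a
# coordinate-wise median as a robust vote).
import numpy as np

def unproject(uv, depth, K, R, t):
    """Lift a pixel (u, v) at a given camera-space depth back to world space."""
    uv1 = np.array([uv[0], uv[1], 1.0])
    x_cam = depth * (np.linalg.inv(K) @ uv1)   # camera-space point on the pixel ray
    return R.T @ (x_cam - t)                   # invert x_cam = R @ x_world + t

def vote_handle_target(handle_world, targets_2d, cameras):
    """Aggregate per-view screen-space targets for one handle into a 3D target."""
    candidates = []
    for uv, (K, R, t) in zip(targets_2d, cameras):
        depth = (R @ handle_world + t)[2]      # handle's original depth in this view
        candidates.append(unproject(uv, depth, K, R, t))
    return np.median(np.stack(candidates), axis=0)

A median (rather than a mean) is used here as the consensus rule so that a single badly off-target view prediction does not dominate the aggregated handle position; whether the paper's voting scheme works this way is not stated on this page.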

@article{sun2025_2506.04562,
  title={Handle-based Mesh Deformation Guided By Vision Language Model},
  author={Xingpeng Sun and Shiyang Jia and Zherong Pan and Kui Wu and Aniket Bera},
  journal={arXiv preprint arXiv:2506.04562},
  year={2025}
}
Main: 7 pages, 9 figures, 5 tables; Bibliography: 2 pages; Appendix: 1 page