HF-VTON: High-Fidelity Virtual Try-On via Consistent Geometric and Semantic Alignment

26 May 2025
Ming Meng
Qi Dong
Jiajie Li
Zhe Zhu
Xingyu Wang
Zhaoxin Fan
Wei Zhao
Wenjun Wu
Main: 11 pages, 8 figures
Bibliography: 2 pages, 1 table
Abstract

Virtual try-on technology has become increasingly important in the fashion and retail industries, enabling the generation of high-fidelity garment images that adapt seamlessly to target human models. While existing methods have achieved notable progress, they still face significant challenges in maintaining consistency across different poses. Specifically, geometric distortions lead to a lack of spatial consistency, mismatches in garment structure and texture across poses result in semantic inconsistency, and the loss or distortion of fine-grained details diminishes visual fidelity. To address these challenges, we propose HF-VTON, a novel framework that ensures high-fidelity virtual try-on performance across diverse poses. HF-VTON consists of three key modules: (1) the Appearance-Preserving Warp Alignment Module (APWAM), which aligns garments to human poses, addressing geometric deformations and ensuring spatial consistency; (2) the Semantic Representation and Comprehension Module (SRCM), which captures fine-grained garment attributes and multi-pose data to enhance semantic representation, maintaining structural, textural, and pattern consistency; and (3) the Multimodal Prior-Guided Appearance Generation Module (MPAGM), which integrates multimodal features and prior knowledge from pre-trained models to optimize appearance generation, ensuring both semantic and geometric consistency. Additionally, to overcome data limitations in existing benchmarks, we introduce the SAMP-VTONS dataset, featuring multi-pose pairs and rich textual annotations for a more comprehensive evaluation. Experimental results demonstrate that HF-VTON outperforms state-of-the-art methods on both VITON-HD and SAMP-VTONS, excelling in visual fidelity, semantic consistency, and detail preservation.
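The abstract describes HF-VTON as a three-stage pipeline (APWAM → SRCM → MPAGM). As a rough structural illustration only, the flow could be sketched as below; all function bodies, data shapes, and names here are placeholder assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class TryOnInputs:
    garment: list  # garment features (placeholder representation)
    pose: list     # target pose features (placeholder representation)
    text: str      # textual annotation, as provided in SAMP-VTONS

def apwam(garment, pose):
    """Appearance-Preserving Warp Alignment Module (sketch):
    align the garment to the target pose for spatial consistency."""
    # Placeholder: a real APWAM would predict a geometric warp field.
    return [g + p for g, p in zip(garment, pose)]

def srcm(warped, text):
    """Semantic Representation and Comprehension Module (sketch):
    pair warped features with fine-grained semantic information."""
    # Placeholder: a real SRCM models structure, texture, and pattern.
    return {"features": warped, "caption": text}

def mpagm(semantic, prior=1.0):
    """Multimodal Prior-Guided Appearance Generation Module (sketch):
    fuse multimodal features with pre-trained prior knowledge."""
    # Placeholder: prior stands in for guidance from a pre-trained model.
    return [prior * f for f in semantic["features"]]

def hf_vton(inputs: TryOnInputs):
    # Stages run sequentially, mirroring the module order in the abstract.
    warped = apwam(inputs.garment, inputs.pose)
    semantic = srcm(warped, inputs.text)
    return mpagm(semantic)
```

For example, `hf_vton(TryOnInputs(garment=[1.0, 2.0], pose=[0.5, 0.5], text="red shirt"))` simply chains the three placeholder stages; the point is the data flow between modules, not the numbers produced.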

@article{meng2025_2505.19638,
  title={HF-VTON: High-Fidelity Virtual Try-On via Consistent Geometric and Semantic Alignment},
  author={Ming Meng and Qi Dong and Jiajie Li and Zhe Zhu and Xingyu Wang and Zhaoxin Fan and Wei Zhao and Wenjun Wu},
  journal={arXiv preprint arXiv:2505.19638},
  year={2025}
}