DS-VTON: High-Quality Virtual Try-on via Disentangled Dual-Scale Generation

1 June 2025
Xianbing Sun, Yan Hong, Jiahui Zhan, Jun Lan, Huijia Zhu, Weiqiang Wang, Liqing Zhang, Jianfu Zhang
Main: 9 pages · Bibliography: 4 pages · Appendix: 7 pages · 11 figures · 5 tables
Abstract

Despite recent progress, most existing virtual try-on methods still struggle to simultaneously address two core challenges: accurately aligning the garment image with the target human body, and preserving fine-grained garment textures and patterns. In this paper, we propose DS-VTON, a dual-scale virtual try-on framework that explicitly disentangles these objectives for more effective modeling. DS-VTON consists of two stages: the first stage generates a low-resolution try-on result to capture the semantic correspondence between garment and body, where reduced detail facilitates robust structural alignment. The second stage introduces a residual-guided diffusion process that reconstructs high-resolution outputs by refining the residual between the two scales, focusing on texture fidelity. In addition, our method adopts a fully mask-free generation paradigm, eliminating reliance on human parsing maps or segmentation masks. By leveraging the semantic priors embedded in pretrained diffusion models, this design more effectively preserves the person's appearance and geometric consistency. Extensive experiments demonstrate that DS-VTON achieves state-of-the-art performance in both structural alignment and texture preservation across multiple standard virtual try-on benchmarks.
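
The two-stage design described in the abstract lends itself to a compact sketch. The following is a minimal, hypothetical illustration of the dual-scale idea, not the authors' implementation: the module names (CoarseTryOn, ResidualRefiner), channel sizes, and input shapes are all assumptions, and a plain convolutional refiner stands in for the paper's residual-guided diffusion process to keep the example short and runnable.

# Minimal sketch of the dual-scale try-on idea from the abstract.
# All names and architectures here are illustrative assumptions,
# not the authors' code; a simple convolutional refiner stands in
# for the paper's residual-guided diffusion process.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CoarseTryOn(nn.Module):
    """Stage 1 (hypothetical): low-resolution try-on focused on
    garment-body structural alignment."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, person_lr, garment_lr):
        # Concatenate person and garment images along the channel axis.
        return self.net(torch.cat([person_lr, garment_lr], dim=1))


class ResidualRefiner(nn.Module):
    """Stage 2 (hypothetical): predicts a high-resolution residual that
    restores texture detail on top of the upsampled coarse result."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, coarse_up, garment_hr):
        residual = self.net(torch.cat([coarse_up, garment_hr], dim=1))
        return coarse_up + residual  # refine the residual, don't regenerate


def dual_scale_tryon(person_hr, garment_hr, scale=4):
    # Stage 1 runs at reduced resolution: suppressing detail lets the
    # model focus on the semantic garment-body correspondence.
    person_lr = F.interpolate(person_hr, scale_factor=1 / scale, mode="bilinear")
    garment_lr = F.interpolate(garment_hr, scale_factor=1 / scale, mode="bilinear")
    coarse = CoarseTryOn()(person_lr, garment_lr)

    # Stage 2 upsamples the coarse result and refines the residual
    # between the two scales, targeting texture fidelity.
    coarse_up = F.interpolate(coarse, size=person_hr.shape[-2:], mode="bilinear")
    return ResidualRefiner()(coarse_up, garment_hr)


if __name__ == "__main__":
    person = torch.rand(1, 3, 256, 192)   # mask-free: raw person image only
    garment = torch.rand(1, 3, 256, 192)
    print(dual_scale_tryon(person, garment).shape)  # torch.Size([1, 3, 256, 192])

Note how the mask-free paradigm shows up in the interface: the pipeline takes only the person and garment images, with no parsing map or segmentation mask input.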

View on arXiv: https://arxiv.org/abs/2506.00908
@article{sun2025_2506.00908,
  title={DS-VTON: High-Quality Virtual Try-on via Disentangled Dual-Scale Generation},
  author={Xianbing Sun and Yan Hong and Jiahui Zhan and Jun Lan and Huijia Zhu and Weiqiang Wang and Liqing Zhang and Jianfu Zhang},
  journal={arXiv preprint arXiv:2506.00908},
  year={2025}
}