SPAST: Arbitrary Style Transfer with Style Priors via Pre-trained Large-scale Model

13 May 2025
Zhanjie Zhang
Quanwei Zhang
Junsheng Luan
Mengyuan Yang
Yun Wang
Lei Zhao
ArXiv | PDF | HTML
Abstract

Given an arbitrary content and style image, arbitrary style transfer aims to render a new stylized image which preserves the content image's structure and possesses the style image's style. Existing arbitrary style transfer methods are based on either small models or pre-trained large-scale models. The small model-based methods fail to generate high-quality stylized images, producing artifacts and disharmonious patterns. The pre-trained large-scale model-based methods can generate high-quality stylized images but struggle to preserve the content structure and require long inference time. To this end, we propose a new framework, called SPAST, to generate high-quality stylized images with less inference time. Specifically, we design a novel Local-global Window Size Stylization Module (LGWSSM) to fuse style features into content features. Besides, we introduce a novel style prior loss, which distills style priors from a pre-trained large-scale model into SPAST and motivates SPAST to generate high-quality stylized images with short inference time (this http URL). We conduct abundant experiments to verify that our proposed method can generate high-quality stylized images with less inference time compared with the SOTA arbitrary style transfer methods.
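
The abstract describes two components: a local-global window stylization module that fuses style features into content features, and a style prior loss that transfers style priors from a frozen pre-trained large-scale model. The sketch below illustrates one plausible reading of these ideas in PyTorch; the names (LocalGlobalFusion, style_prior_loss), the window size, and the blending scheme are assumptions made for illustration and are not taken from the paper.

# Illustrative sketch only: (1) cross-attention from content queries to style
# keys/values computed over both local windows and the whole feature map, and
# (2) a style prior loss matching stylized features to those of a frozen
# pre-trained large-scale model. All names and hyper-parameters are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LocalGlobalFusion(nn.Module):
    """Fuse style features into content features with a local and a global
    attention branch, blended by a learnable weight. Assumes content and style
    feature maps share the same spatial size, divisible by the window size."""

    def __init__(self, dim: int, window: int = 8):
        super().__init__()
        self.window = window
        self.q = nn.Conv2d(dim, dim, 1)
        self.k = nn.Conv2d(dim, dim, 1)
        self.v = nn.Conv2d(dim, dim, 1)
        self.out = nn.Conv2d(dim, dim, 1)
        self.alpha = nn.Parameter(torch.tensor(0.5))  # local/global blend

    def _attend(self, q, k, v):
        # q, k, v: (B, N, C) -> scaled dot-product attention
        attn = torch.softmax(q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5, dim=-1)
        return attn @ v

    def forward(self, content, style):
        b, c, h, w = content.shape
        q, k, v = self.q(content), self.k(style), self.v(style)

        # Global branch: attention over all spatial positions.
        flat = lambda t: t.flatten(2).transpose(1, 2)  # (B, HW, C)
        g = self._attend(flat(q), flat(k), flat(v))
        g = g.transpose(1, 2).reshape(b, c, h, w)

        # Local branch: attention restricted to non-overlapping windows.
        ws = self.window

        def to_windows(t):
            t = t.reshape(b, c, h // ws, ws, w // ws, ws)
            return t.permute(0, 2, 4, 3, 5, 1).reshape(-1, ws * ws, c)

        l = self._attend(to_windows(q), to_windows(k), to_windows(v))
        l = l.reshape(b, h // ws, w // ws, ws, ws, c)
        l = l.permute(0, 5, 1, 3, 2, 4).reshape(b, c, h, w)

        fused = self.alpha * l + (1 - self.alpha) * g
        return content + self.out(fused)


def style_prior_loss(stylized_feats, prior_feats):
    """Match features of the stylized image to 'style prior' features from a
    frozen pre-trained large-scale model; here simply an L2 distance per level."""
    return sum(F.mse_loss(s, p.detach()) for s, p in zip(stylized_feats, prior_feats))

In a training loop, the fused features would be decoded into a stylized image, and style_prior_loss would be computed between that image's features and the frozen large-scale model's features, alongside the usual content and style losses; this is a guess at how the pieces fit together, not the paper's procedure.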

View on arXiv
@article{zhang2025_2505.08695,
  title={SPAST: Arbitrary Style Transfer with Style Priors via Pre-trained Large-scale Model},
  author={Zhanjie Zhang and Quanwei Zhang and Junsheng Luan and Mengyuan Yang and Yun Wang and Lei Zhao},
  journal={arXiv preprint arXiv:2505.08695},
  year={2025}
}