ResearchTrend.AI

arXiv: 2102.00635

Bridging Unpaired Facial Photos and Sketches by Line-Drawings

1 February 2021
Meimei Shang
Fei Gao
Xiang Li
Jingjie Zhu
Lingna Dai
Abstract

In this paper, we propose a novel method to learn face sketch synthesis models from unpaired data. Our main idea is to bridge the photo domain $\mathcal{X}$ and the sketch domain $\mathcal{Y}$ through the line-drawing domain $\mathcal{Z}$. Specifically, we map both photos and sketches to line-drawings using a neural style transfer method, i.e. $F: \mathcal{X}/\mathcal{Y} \mapsto \mathcal{Z}$. Consequently, we obtain *pseudo paired data* $(\mathcal{Z}, \mathcal{Y})$, and can learn the mapping $G: \mathcal{Z} \mapsto \mathcal{Y}$ in a supervised manner. At inference, given a facial photo, we first transfer it to a line-drawing and then to a sketch via $G \circ F$. Additionally, we propose a novel stroke loss for generating different types of strokes. Our method, termed sRender, accords well with human artists' rendering process. Experimental results demonstrate that sRender can generate multi-style sketches and significantly outperforms existing unpaired image-to-image translation methods.
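The two-stage inference described in the abstract can be sketched in code. The snippet below is only an illustrative stand-in, not the paper's implementation: `F` (the style-transfer mapping to line-drawings) is replaced by a crude gradient-based edge map, and `G` (the learned line-drawing-to-sketch generator) by a simple soft remapping; both function names and the toy image are hypothetical.

```python
import numpy as np

def F(photo: np.ndarray) -> np.ndarray:
    """Stand-in for F: X/Y -> Z (neural style transfer to line-drawings).
    Here approximated by thresholding the gradient magnitude."""
    gy, gx = np.gradient(photo.astype(float))
    edges = np.hypot(gx, gy)
    return (edges > edges.mean()).astype(float)

def G(line_drawing: np.ndarray) -> np.ndarray:
    """Stand-in for the learned generator G: Z -> Y.
    Here: soften binary lines to mimic pencil strokes."""
    return 0.9 * line_drawing + 0.05

def srender_infer(photo: np.ndarray) -> np.ndarray:
    """Inference is the composition G o F: photo -> line-drawing -> sketch."""
    return G(F(photo))

# Toy 8x8 "photo" with some structure so the edge map is non-trivial.
rng = np.random.default_rng(0)
photo = rng.random((8, 8))
sketch = srender_infer(photo)
print(sketch.shape)  # (8, 8)
```

The point of the composition is that `G` only ever needs paired line-drawing/sketch data $(\mathcal{Z}, \mathcal{Y})$, which the paper obtains synthetically, so no paired photo/sketch data is required.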

View on arXiv