In this paper, we propose a novel method to learn face sketch synthesis models from unpaired data. Our main idea is to bridge the photo domain and the sketch domain via an intermediate line-drawing domain. Specifically, we map both photos and sketches to line-drawings using a neural style transfer method. Consequently, we obtain \textit{pseudo paired data} and can learn the line-drawing-to-sketch mapping in a supervised manner. In the inference stage, given a facial photo, we first transfer it to a line-drawing and then to a sketch with the learned model. Additionally, we propose a novel stroke loss for generating different types of strokes. Our method, termed sRender, accords well with human artists' rendering process. Experimental results demonstrate that sRender can generate multi-style sketches and significantly outperforms existing unpaired image-to-image translation methods.
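To make the two-stage pipeline concrete, below is a minimal, illustrative PyTorch sketch. The module names (\textit{LineDrawingTransfer}, \textit{SketchGenerator}), the toy architectures, and the L1 stand-in for the proposed stroke loss are assumptions for illustration only, not the authors' implementation.

```python
# Minimal sketch of the two-stage pipeline described in the abstract.
# All module names and architectures here are illustrative placeholders.
import torch
import torch.nn as nn

class LineDrawingTransfer(nn.Module):
    """Placeholder for the fixed neural style transfer that maps a photo
    or a sketch into the shared line-drawing domain."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3, 1, kernel_size=3, padding=1)

    def forward(self, image):
        return torch.sigmoid(self.net(image))

class SketchGenerator(nn.Module):
    """Placeholder generator trained to map line-drawings to sketches."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, line_drawing):
        return self.net(line_drawing)

def build_pseudo_pair(to_lines, sketch_rgb):
    """Map an unpaired sketch into the line-drawing domain, yielding a
    (line-drawing, sketch) pair without any photo-sketch supervision."""
    with torch.no_grad():
        line = to_lines(sketch_rgb)
    return line, sketch_rgb.mean(dim=1, keepdim=True)  # grayscale target

# --- training step on pseudo paired data (supervised) ---
to_lines, generator = LineDrawingTransfer(), SketchGenerator()
optimizer = torch.optim.Adam(generator.parameters(), lr=2e-4)
# L1 reconstruction as a stand-in for the paper's losses (incl. stroke loss)
reconstruction = nn.L1Loss()

sketch_batch = torch.rand(4, 3, 64, 64)  # stand-in for unpaired sketches
line_batch, target_batch = build_pseudo_pair(to_lines, sketch_batch)
loss = reconstruction(generator(line_batch), target_batch)
optimizer.zero_grad()
loss.backward()
optimizer.step()

# --- inference: photo -> line-drawing -> sketch ---
photo = torch.rand(1, 3, 64, 64)
with torch.no_grad():
    synthesized_sketch = generator(to_lines(photo))
```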