
Image Synthesis and Style Transfer

15 January 2019
S. Phon-Amnuaisuk
GAN
arXiv:1901.04686
Abstract

Affine transformations, layer blending, and artistic filters are popular processes that graphic designers employ to transform the pixels of an image and create a desired effect. Here, we examine various approaches that synthesize new images: pixel-based compositing models and, in particular, the distributed representations of deep neural network models. This paper focuses on synthesizing new images from a learned representation model obtained from the VGG network. This approach offers an interesting creative process because information such as contour and shape is effectively captured in the distributed representations of the hidden layers of a deep VGG network. Conceptually, if $\Phi$ is the function that transforms input pixels into the distributed representations of the VGG layers $\mathbf{h}$, a new synthesized image $X$ can be generated from its inverse function, $X = \Phi^{-1}(\mathbf{h})$. We describe the concept behind the approach and present some representative synthesized images and style-transferred examples.
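In practice $\Phi^{-1}$ has no closed form, so the image is recovered by optimization: start from noise and adjust the pixels of $X$ until the network's activations match the target $\mathbf{h}$. Below is a minimal sketch of this feature-inversion step in PyTorch, assuming a pretrained torchvision VGG19; the layer cut-off, the file name content.jpg, and the hyperparameters are illustrative choices, not the paper's exact settings.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Phi: pixels -> hidden-layer activations. Truncating VGG19's feature
# stack at index 16 keeps everything up to relu3_3; other cut-offs
# capture coarser or finer structure.
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features[:16]
vgg = vgg.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

# ImageNet mean/std normalization is omitted for brevity; add it for
# results closer to what the pretrained weights expect.
preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor()])
content = preprocess(Image.open("content.jpg").convert("RGB")).unsqueeze(0).to(device)

# Target representation h = Phi(content).
with torch.no_grad():
    h = vgg(content)

# Approximate X = Phi^{-1}(h): there is no closed-form inverse, so we
# optimize the pixels of X until Phi(X) matches h.
x = torch.rand_like(content, requires_grad=True)
opt = torch.optim.Adam([x], lr=0.05)

for step in range(300):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(vgg(x), h)
    loss.backward()
    opt.step()
    with torch.no_grad():
        x.clamp_(0.0, 1.0)  # keep pixels in a valid range
```

Style transfer fits the same recipe: the loss is extended so that, alongside matching the content features, the optimized image also matches summary statistics (e.g., Gram matrices) of a style image's activations at several VGG layers.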
