ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

arXiv:2106.06927 (v5, latest)

Inverting Adversarially Robust Networks for Image Synthesis

13 June 2021
Renan A. Rojas-Gomez
Raymond A. Yeh
Minh Do
A. Nguyen
Abstract

Although unconditional feature inversion underlies many image synthesis applications, training an inverter demands a high computational budget, large decoding capacity, and restrictive conditions such as autoregressive priors. To address these limitations, we propose using adversarially robust representations as a perceptual primitive for feature inversion. We train an adversarially robust encoder to extract disentangled, perceptually aligned image representations, making them easy to invert. By training a simple generator whose architecture mirrors that of the encoder, we achieve superior reconstruction quality and generalization over standard models. Building on this, we propose an adversarially robust autoencoder and demonstrate its improved performance on anomaly detection, style transfer, and image denoising tasks. Comparisons against recent learning-based methods show that our model attains improved performance with significantly less complexity.
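The "mirror architecture" idea from the abstract can be illustrated with a minimal numpy sketch: the decoder's layer widths are the encoder's widths in reverse, so a feature vector produced by the encoder maps back to the input dimensionality. All layer widths, initializations, and the MLP form below are illustrative assumptions, not the paper's actual model (the adversarial training of the encoder is omitted).

```python
import numpy as np

rng = np.random.default_rng(0)

def mirror_architecture(encoder_dims):
    """Given encoder layer widths, return the mirrored decoder widths.
    Example: [784, 256, 64] -> [64, 256, 784]."""
    return list(reversed(encoder_dims))

def init_mlp(dims):
    # One He-initialized weight matrix per consecutive pair of widths.
    return [rng.standard_normal((a, b)) * np.sqrt(2.0 / a)
            for a, b in zip(dims[:-1], dims[1:])]

def forward(weights, x):
    # Plain ReLU MLP forward pass.
    for w in weights:
        x = np.maximum(x @ w, 0.0)
    return x

# Hypothetical widths: a 784-d image compressed to a 64-d feature.
enc_dims = [784, 256, 64]
dec_dims = mirror_architecture(enc_dims)

encoder = init_mlp(enc_dims)   # in the paper, trained adversarially
decoder = init_mlp(dec_dims)   # simple generator mirroring the encoder

x = rng.standard_normal((1, 784))
z = forward(encoder, x)        # feature representation, shape (1, 64)
x_hat = forward(decoder, z)    # reconstruction, shape (1, 784)
```

The point of the sketch is only structural: because the decoder reverses the encoder's widths, `x_hat` has the same shape as `x`, so a reconstruction loss between them is well defined.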
