Direct Adversarial Training: A New Approach for Stabilizing The Training
Process of GANs
Generative Adversarial Networks (GANs) are the most popular models for image generation, trained by jointly and gradually optimizing a discriminator and a generator. However, instability of the training process remains one of the open problems for all GAN-based algorithms. To stabilize training, several regularization and normalization techniques have been proposed that make the discriminator satisfy a Lipschitz continuity constraint. In this paper, a new approach inspired by work on adversarial attacks is proposed to stabilize the training process of GANs. We find that during training, the images produced by the generator can sometimes act like adversarial examples for the discriminator, which may be part of the reason for unstable training. Based on this observation, we propose introducing an adversarial training method, Direct Adversarial Training (DAT), into the training process of GANs to improve stability. We prove that DAT can adaptively limit the Lipschitz constant of the discriminator. The improved performance of the proposed method is verified on multiple baseline and SOTA networks, such as DCGAN, WGAN, Spectral Normalization GAN, Self-Supervised GAN, and Information Maximum GAN.
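To make the core idea concrete, here is a minimal NumPy sketch of FGSM-style adversarial training applied to a discriminator's inputs. This is an illustrative toy, not the paper's implementation: the names `discriminator`, `bce_loss_grad_x`, and `fgsm_perturb`, the logistic-discriminator form, and the step size `eps` are all our assumptions.

```python
import numpy as np

def discriminator(x, w):
    """Toy logistic discriminator: D(x) = sigmoid(w . x)."""
    return 1.0 / (1.0 + np.exp(-x @ w))

def bce_loss_grad_x(x, w, label):
    """Gradient of the binary cross-entropy loss w.r.t. the input x.

    For a logistic D with BCE loss, dL/dx = (D(x) - label) * w.
    """
    return (discriminator(x, w) - label)[:, None] * w[None, :]

def fgsm_perturb(x, w, label, eps=0.1):
    """FGSM-style adversarial example: step along the loss gradient sign.

    Training the discriminator on such perturbed inputs is one way to
    discourage sharp changes in its output near the data, in the spirit
    of limiting its effective Lipschitz constant.
    """
    return x + eps * np.sign(bce_loss_grad_x(x, w, label))

rng = np.random.default_rng(0)
w = rng.normal(size=3)
real = rng.normal(size=(4, 3))
adv = fgsm_perturb(real, w, label=1.0)

# The perturbation increases the loss on "real" samples, i.e. lowers
# the discriminator's score on them.
assert np.all(discriminator(adv, w) <= discriminator(real, w))
```

In a GAN training loop, such perturbed inputs would be fed to the discriminator update step in place of (or alongside) the clean samples, so the discriminator is trained to be robust to the adversarially shifted examples the abstract describes.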