
Imperfect ImaGANation: Implications of GANs Exacerbating Biases on Facial Data Augmentation and Snapchat Selfie Lenses

Abstract

Synthetic data generated by Generative Adversarial Networks (GANs) is widely used for a variety of tasks, ranging from data augmentation to stylizing images. While practitioners celebrate this method as an economical way to obtain synthetic data for training data-hungry machine learning models or to provide new features to users of mobile applications, it is unclear whether they recognize the perils of such techniques when applied to an already-biased dataset. Although one expects GANs to replicate the distribution of the original data, in real-world settings with limited data and finite network capacity, GANs suffer from mode collapse. In this paper, we show that popular (conditional and unconditional) GAN variants exacerbate biases along the axes of gender and skin tone in the generated data. First, we show that readily accessible GAN variants such as DCGANs 'imagine' faces of synthetic engineering professors that have masculine facial features and fair skin tones. Further, architectures such as AdaGAN and ProGAN, which attempt to address the mode-collapse issue, cannot completely correct this behavior. Second, we show that a conditional GAN variant transforms input images of female faces to have more masculine features when asked to generate faces of engineering professors. Worse yet, prevalent filters on Snapchat consistently lighten the skin tones of women of color when trying to make face images appear more feminine. Thus, our study is meant to serve as a cautionary tale for practitioners, educating them about the side effect of bias amplification when applying GAN-based techniques.
