Effective training of neural networks requires large amounts of data. In the low-data regime, parameters are underdetermined, and learnt networks generalise poorly. Data Augmentation \cite{krizhevsky2012imagenet} alleviates this by using existing data more effectively. However, standard data augmentation produces only a limited range of plausible alternative data. Given the potential to generate a much broader set of augmentations, we design and train a generative model to perform data augmentation. The model, based on image-conditional Generative Adversarial Networks, is trained on data from a source domain and learns to take any data item and generate other within-class data items from it. As this generative process does not depend on the classes themselves, it can be applied to novel unseen classes of data. We show that a Data Augmentation Generative Adversarial Network (DAGAN) augments standard vanilla classifiers well. We also show that a DAGAN can enhance few-shot learning systems such as Matching Networks. We demonstrate these approaches on Omniglot, on EMNIST (having learnt the DAGAN on Omniglot), and on VGG-Face data. In the low-data regime we observe an accuracy increase of over 13 percentage points on Omniglot (from 69\% to 82\%), as well as gains on EMNIST (from 73.9\% to 76\%) and VGG-Face (from 4.5\% to 12\%); with Matching Networks we observe an increase of 0.5\% on Omniglot (from 96.9\% to 97.4\%) and of 1.8\% on EMNIST (from 59.5\% to 61.3\%).
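To make the image-conditional generation idea concrete, the following is a minimal PyTorch sketch of a conditional generator that encodes a source image, concatenates the encoding with a noise vector, and decodes a candidate within-class sample. The layer sizes, 28x28 image resolution, and class names (e.g. `ConditionalGenerator`) are illustrative assumptions, not the paper's DAGAN architecture; the discriminator and adversarial training loop are omitted.

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Illustrative image-conditional generator (hypothetical architecture):
    source image -> feature vector; [features, noise] -> generated image
    intended to lie in the same class as the source."""
    def __init__(self, channels=1, latent_dim=100, feat_dim=256):
        super().__init__()
        # Encoder: downsample the source image to a feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, 64, 4, stride=2, padding=1), nn.ReLU(),   # 28 -> 14
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),        # 14 -> 7
            nn.Flatten(),
            nn.Linear(128 * 7 * 7, feat_dim),
        )
        # Decoder: upsample [image features, noise] back to image resolution.
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim + latent_dim, 128 * 7 * 7), nn.ReLU(),
            nn.Unflatten(1, (128, 7, 7)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # 7 -> 14
            nn.ConvTranspose2d(64, channels, 4, stride=2, padding=1),        # 14 -> 28
            nn.Tanh(),
        )

    def forward(self, x, z):
        # Condition generation on the source image x; z injects variation.
        return self.decoder(torch.cat([self.encoder(x), z], dim=1))

# Usage: one augmented candidate per source image.
x = torch.randn(8, 1, 28, 28)          # batch of source images
z = torch.randn(8, 100)                # noise vectors
x_aug = ConditionalGenerator()(x, z)   # shape (8, 1, 28, 28)
```

Because the generator is conditioned on an image rather than a class label, the same trained network can, in principle, be applied to images from classes never seen during training, which is what allows the augmentation to transfer to novel classes as described above.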