arXiv:1711.04340

Data Augmentation Generative Adversarial Networks

12 November 2017
Antreas Antoniou
Amos Storkey
Harrison Edwards
Abstract

Effective training of neural networks requires much data. In the low-data regime, parameters are underdetermined, and learnt networks generalise poorly. Data augmentation (Krizhevsky et al., 2012) alleviates this by using existing data more effectively. However, standard data augmentation produces only limited plausible alternative data. Given the potential to generate a much broader set of augmentations, we design and train a generative model to do data augmentation. The model, based on image-conditional Generative Adversarial Networks, takes data from a source domain and learns to take any data item and generate other within-class data items. As this generative process does not depend on the classes themselves, it can be applied to novel unseen classes of data. We show that a Data Augmentation Generative Adversarial Network (DAGAN) augments standard vanilla classifiers well. We also show that a DAGAN can enhance few-shot learning systems such as Matching Networks. We demonstrate these approaches on Omniglot, on EMNIST (with the DAGAN learnt on Omniglot), and on VGG-Face data. In the low-data regime we see an accuracy increase of over 13% on Omniglot (from 69% to 82%), with further gains on EMNIST (73.9% to 76%) and VGG-Face (4.5% to 12%); in Matching Networks we observe an increase of 0.5% on Omniglot (from 96.9% to 97.4%) and of 1.8% on EMNIST (from 59.5% to 61.3%).
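
The following is a minimal, illustrative sketch of the DAGAN training idea described in the abstract, not the authors' implementation: a generator conditioned on a source image and a noise vector produces a within-class variant, while a discriminator tries to distinguish real same-class pairs from (source, generated) pairs. The flat MLP architecture, the standard GAN loss, the layer sizes, and the random toy data are all assumptions made for brevity; the abstract does not specify the networks or loss actually used.

# Sketch of DAGAN-style training (illustrative only; sizes, loss and data are assumed).
import torch
import torch.nn as nn

IMG = 32 * 32          # flattened image size (assumed)
Z = 100                # noise dimension (assumed)

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IMG + Z, 256), nn.ReLU(),
            nn.Linear(256, IMG), nn.Tanh(),
        )
    def forward(self, x, z):
        # condition on the source image by concatenating it with the noise vector
        return self.net(torch.cat([x, z], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * IMG, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
        )
    def forward(self, x, other):
        # score a pair: (source image, candidate within-class image)
        return self.net(torch.cat([x, other], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(3):                       # toy loop on random stand-in data
    x = torch.rand(8, IMG) * 2 - 1          # source images, scaled to [-1, 1]
    x_pos = torch.rand(8, IMG) * 2 - 1      # real images from the same class as x
    z = torch.randn(8, Z)

    # discriminator: real same-class pair -> 1, (source, generated) pair -> 0
    fake = G(x, z).detach()
    loss_d = bce(D(x, x_pos), torch.ones(8, 1)) + bce(D(x, fake), torch.zeros(8, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # generator: produce variants the discriminator accepts as same-class
    loss_g = bce(D(x, G(x, z)), torch.ones(8, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

Because the discriminator only ever sees image pairs rather than class labels, a generator trained this way can in principle be applied to items from classes never seen during training, which is the property the abstract exploits for few-shot augmentation.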
