
Implicit Counterfactual Data Augmentation for Deep Neural Networks

26 April 2023
Xiaoling Zhou
Ou Wu
CML · OOD · BDL
Main: 38 pages · 15 figures · Bibliography: 8 pages · 8 tables
Abstract

Machine-learning models are prone to capturing spurious correlations between non-causal attributes and classes, and counterfactual data augmentation is a promising direction for breaking these spurious associations. However, explicitly generating counterfactual data is challenging and reduces training efficiency. This study therefore proposes an implicit counterfactual data augmentation (ICDA) method to remove spurious correlations and make stable predictions. First, a novel sample-wise augmentation strategy is developed that generates semantically and counterfactually meaningful deep features, with a distinct augmentation strength for each sample. Second, an easy-to-compute surrogate loss on the augmented feature set is derived for the limit in which the number of augmented samples becomes infinite. Third, two concrete schemes, direct quantification and meta-learning, are proposed to derive the key parameters of the robust loss. In addition, ICDA is explained from a regularization perspective, and extensive experiments indicate that the method consistently improves the generalization performance of popular deep networks in multiple typical learning scenarios that require out-of-distribution generalization.
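
The second step above, replacing explicit counterfactual samples with a closed-form loss in the infinite-augmentation limit, follows the general pattern of implicit (ISDA-style) feature augmentation: each deep feature is treated as a Gaussian whose covariance encodes meaningful directions of variation, and an upper bound of the expected cross-entropy is minimized. The sketch below illustrates that pattern with a per-sample strength; the function name, the diagonal covariance, and the way the strengths are supplied are illustrative assumptions, not the authors' exact loss.

# Illustrative sketch (not the paper's exact formulation): an ISDA-style
# surrogate loss with a per-sample augmentation strength, assuming diagonal
# class-conditional covariances over deep features.
import torch
import torch.nn.functional as F

def implicit_aug_loss(features, labels, weight, bias, class_cov_diag, strength):
    """Upper bound of the expected cross-entropy under Gaussian feature augmentation.

    features:       (N, D) deep features a_i
    labels:         (N,)   ground-truth classes y_i
    weight, bias:   (C, D) and (C,) parameters of the final linear classifier
    class_cov_diag: (C, D) diagonal class-conditional covariance estimates
    strength:       (N,)   per-sample augmentation strength lambda_i
    """
    logits = features @ weight.t() + bias                                  # (N, C)
    # (w_j - w_{y_i}) for every sample i and class j: (N, C, D)
    w_diff = weight.unsqueeze(0) - weight[labels].unsqueeze(1)
    # (lambda_i / 2) * (w_j - w_{y_i})^T Sigma_{y_i} (w_j - w_{y_i})
    sigma_y = class_cov_diag[labels].unsqueeze(1)                          # (N, 1, D)
    quad = 0.5 * strength.view(-1, 1) * (w_diff.pow(2) * sigma_y).sum(-1)  # (N, C)
    # The penalty vanishes for j = y_i, so standard cross-entropy on the
    # shifted logits equals the surrogate upper bound.
    return F.cross_entropy(logits + quad, labels)

# Toy usage with random tensors; in practice the covariances would be running
# estimates and the strengths would come from direct quantification or meta-learning.
N, D, C = 32, 64, 10
fc = torch.nn.Linear(D, C)
loss = implicit_aug_loss(torch.randn(N, D), torch.randint(0, C, (N,)),
                         fc.weight, fc.bias, torch.full((C, D), 0.1), torch.rand(N))
loss.backward()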

View on arXiv
@article{zhou2025_2304.13431,
  title={Implicit Counterfactual Data Augmentation for Robust Learning},
  author={Xiaoling Zhou and Ou Wu and Michael K. Ng},
  journal={arXiv preprint arXiv:2304.13431},
  year={2025}
}