While machine learning models rapidly advance the state-of-the-art on various real-world tasks, out-of-domain (OOD) generalization remains a challenging problem given the vulnerability of these models to spurious correlations. We propose a causally-motivated balanced mini-batch sampling strategy to transform the observed training distribution into a balanced distribution that is free of spurious correlations. We argue that the Bayes optimal classifier trained on such a balanced distribution is minimax optimal across a sufficiently diverse environment space. We also provide an identifiability guarantee for the latent variable model underlying the proposed data generation process with invariant causal mechanisms, given a sufficient number of training environments. Experiments on three domain generalization datasets demonstrate empirically that our balanced mini-batch sampling strategy improves the performance of four established domain generalization baselines relative to random mini-batch sampling.
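As a minimal sketch of the idea, the balanced distribution could be approximated by drawing each mini-batch uniformly over (environment, class) groups, so that batch-level statistics decouple the label from environment-specific spurious features. The function name, the group structure, and the uniform per-group quota below are illustrative assumptions, not the paper's exact construction.

```python
import random
from collections import defaultdict

def balanced_minibatch_sampler(labels, envs, batch_size, num_batches):
    """Yield mini-batches of indices with an equal quota from every
    (environment, class) group, approximating a distribution in which
    the label is independent of the environment.

    Illustrative sketch: the grouping and per-group quota are
    assumptions, not the paper's exact balancing scheme.
    """
    # Bucket example indices by their (environment, class) group.
    groups = defaultdict(list)
    for idx, (y, e) in enumerate(zip(labels, envs)):
        groups[(e, y)].append(idx)

    group_keys = list(groups)
    per_group = max(1, batch_size // len(group_keys))
    for _ in range(num_batches):
        batch = []
        for key in group_keys:
            # Sample with replacement so small groups are never exhausted.
            batch.extend(random.choices(groups[key], k=per_group))
        random.shuffle(batch)
        yield batch

# Usage: iterate the sampler and index into the training set.
labels = [0, 0, 1, 1, 0, 1, 0, 1]
envs   = [0, 1, 0, 1, 0, 1, 1, 0]
for batch in balanced_minibatch_sampler(labels, envs, batch_size=4, num_batches=2):
    print(batch)
```

A sampler like this drops into standard training loops in place of random shuffling, which is consistent with the abstract's claim that the strategy composes with existing domain generalization baselines.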