
Challenges and Opportunities in Improving Worst-Group Generalization in Presence of Spurious Features

Abstract

Deep neural networks often exploit *spurious* features that are present in the majority of examples within a class during training. This leads to *poor worst-group test accuracy*, i.e., poor accuracy for minority groups that lack these spurious features. Despite the growing body of recent efforts to address spurious correlations (SC), several challenging settings remain. In this work, we propose studying methods to mitigate SC in settings with: 1) spurious features that are learned more slowly, 2) a larger number of classes, and 3) a larger number of groups. We introduce two new datasets, Animals and SUN, to facilitate this study and conduct a systematic benchmarking of 8 state-of-the-art (SOTA) methods across a total of 5 vision datasets, training over 5,000 models. Through this, we highlight how existing group inference methods struggle in the presence of spurious features that are learned later in training. Additionally, we demonstrate how all existing methods struggle in settings with more groups and/or classes. Finally, we show the importance of careful model selection (hyperparameter tuning) in extracting optimal performance, especially in the more challenging settings we introduced, and propose more cost-efficient strategies for model selection. Overall, through extensive and systematic experiments, this work uncovers a suite of new challenges and opportunities for improving worst-group generalization in the presence of spurious features. Our datasets, methods, and scripts are available at this https URL.
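As a point of reference for the metric discussed above, the following is a minimal sketch (not the authors' code) of how worst-group accuracy is typically computed: each example is assigned to a group, usually a (class, spurious attribute) pair, and the reported score is the minimum per-group accuracy. Function and variable names here are illustrative assumptions.

```python
import numpy as np

def worst_group_accuracy(preds, labels, groups):
    """Return the minimum accuracy over groups (worst-group accuracy)."""
    preds, labels, groups = map(np.asarray, (preds, labels, groups))
    accs = []
    for g in np.unique(groups):
        mask = groups == g
        # Accuracy restricted to examples belonging to group g
        accs.append((preds[mask] == labels[mask]).mean())
    return min(accs)

# Toy usage: the minority group (group 1, lacking the spurious feature)
# drags worst-group accuracy down even though average accuracy is high.
preds  = [0, 0, 1, 1, 1, 0]
labels = [0, 0, 1, 1, 1, 1]
groups = [0, 0, 0, 0, 1, 1]
print(worst_group_accuracy(preds, labels, groups))  # 0.5
```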

@article{joshi2025_2306.11957,
  title={Challenges and Opportunities in Improving Worst-Group Generalization in Presence of Spurious Features},
  author={Siddharth Joshi and Yu Yang and Yihao Xue and Wenhan Yang and Baharan Mirzasoleiman},
  journal={arXiv preprint arXiv:2306.11957},
  year={2025}
}