OOD-Chameleon: Is Algorithm Selection for OOD Generalization Learnable?

Out-of-distribution (OOD) generalization is challenging because distribution shifts come in many forms. Numerous algorithms exist to address specific settings, but choosing the right training algorithm for the right dataset, without trial and error, is difficult. Indeed, real-world applications often involve multiple types and combinations of shifts that are hard to analyze theoretically.

Method. This work explores the possibility of learning the selection of a training algorithm for OOD generalization. We propose a proof of concept (OOD-Chameleon) that formulates the selection as a multi-label classification over candidate algorithms, trained on a dataset of datasets representing a variety of shifts. We evaluate the ability of OOD-Chameleon to rank algorithms on unseen shifts and datasets based only on dataset characteristics, i.e., without training models first, unlike traditional model selection.

Findings. Extensive experiments show that the learned selector identifies high-performing algorithms across synthetic, vision, and language tasks. Further inspection shows that it learns non-trivial decision rules, which provide new insights into the applicability of existing algorithms. Overall, this new approach opens the possibility of better exploiting and understanding the plethora of existing algorithms for OOD generalization.
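To make the formulation concrete, the sketch below illustrates the selection step as multi-label classification: map per-dataset descriptors to the set of algorithms expected to perform well, then query the trained selector on an unseen dataset's descriptor alone. The descriptor features, candidate-algorithm list, labeling rule, and use of scikit-learn are illustrative assumptions, not the paper's exact implementation.

# Minimal sketch of algorithm selection as multi-label classification.
# Assumptions (not from the paper): synthetic dataset descriptors, a toy
# candidate-algorithm list, and a scikit-learn selector.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier

CANDIDATES = ["ERM", "GroupDRO", "Reweighting", "Mixup"]  # illustrative

rng = np.random.default_rng(0)
n_datasets, n_features = 200, 5

# Each row describes one training dataset (e.g., size, class imbalance,
# degree of spurious correlation, ...) -- purely synthetic here.
descriptors = rng.random((n_datasets, n_features))

# Multi-label targets: 1 if an algorithm performed well on that dataset
# (e.g., close to the best achievable worst-group accuracy) -- random here.
labels = (rng.random((n_datasets, len(CANDIDATES))) > 0.5).astype(int)

selector = MultiOutputClassifier(RandomForestClassifier(n_estimators=100))
selector.fit(descriptors, labels)

# At test time: predict suitable algorithms for an unseen dataset from its
# descriptor alone, i.e., without training any candidate model first.
new_descriptor = rng.random((1, n_features))
pred = selector.predict(new_descriptor)[0]
print([alg for alg, keep in zip(CANDIDATES, pred) if keep])

In this toy setup the labels are random, so the output is meaningless; the point is only the interface, where selection reduces to a prediction from dataset statistics rather than from trained models.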
@article{jiang2025_2410.02735,
  title   = {OOD-Chameleon: Is Algorithm Selection for OOD Generalization Learnable?},
  author  = {Liangze Jiang and Damien Teney},
  journal = {arXiv preprint arXiv:2410.02735},
  year    = {2025}
}