Out-of-Distribution Generalization via Risk Extrapolation (REx)

Abstract

Generalizing outside of the training distribution is an open challenge for current machine learning systems. A weak form of out-of-distribution (OoD) generalization is met when a model is able to interpolate between multiple observed distributions. One way to achieve this is through robust optimization, where the worst case over convex combinations of the risks on different training distributions is minimized. However, a much stronger form of OoD generalization is the ability of models to extrapolate beyond the distributions observed during training. Current methods targeting extrapolation are either not scalable or require adversarial training procedures. We introduce risk extrapolation (REx) as a simpler, yet effective alternative to previous approaches. REx can be viewed as encouraging robustness over affine combinations of training risks, by encouraging strict equality between training risks. We show conceptually how this principle enables extrapolation, and demonstrate the effectiveness and scalability of instantiations of REx on various generalization benchmarks.
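The core principle (encouraging equality between per-distribution training risks) can be sketched as a variance penalty on the risks, as in the V-REx instantiation. A minimal illustration, assuming per-environment risks are already computed; the function name `vrex_objective` and the penalty weight `beta` are illustrative choices, not fixed by the abstract:

```python
import numpy as np

def vrex_objective(risks, beta=10.0):
    """Sketch of a V-REx-style objective: average the per-environment
    training risks and add a variance penalty. Driving the variance to
    zero encourages strict equality between training risks, which
    corresponds to robustness over affine combinations of those risks.

    risks: sequence of scalar risks, one per training environment.
    beta:  illustrative penalty weight trading off average risk
           against equality of risks.
    """
    risks = np.asarray(risks, dtype=float)
    # np.var uses the population variance (ddof=0) by default.
    return risks.mean() + beta * risks.var()

# Example: equal risks incur no penalty; unequal risks are penalized.
balanced = vrex_objective([2.0, 2.0])    # -> 2.0 (variance term is zero)
unbalanced = vrex_objective([1.0, 3.0])  # -> 2.0 + 10.0 * 1.0 = 12.0
```

In practice the penalty would be added to a standard training loss and minimized with gradient descent; a large `beta` pushes the model toward solutions whose risk is invariant across training environments.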
