Learning Where to Learn: Training Distribution Selection for Provable OOD Performance

27 May 2025
Nicolas Guerra
Nicholas H. Nelsen
Yunan Yang
Main: 9 pages · 8 figures · Bibliography: 4 pages · 3 tables · Appendix: 19 pages
Abstract

Out-of-distribution (OOD) generalization remains a fundamental challenge in machine learning. Models trained on one data distribution often degrade substantially when evaluated on shifted or unseen domains. To address this challenge, this paper studies the design of training data distributions that maximize average-case OOD performance. First, a theoretical analysis establishes a family of generalization bounds that quantify how the choice of training distribution influences OOD error across a predefined family of target distributions. These insights motivate two complementary algorithmic strategies: (i) directly formulating OOD risk minimization as a bilevel optimization problem over the space of probability measures, and (ii) minimizing a theoretical upper bound on the OOD error. Finally, the paper evaluates both approaches on a range of function approximation and operator learning examples. The proposed methods significantly improve OOD accuracy over standard empirical risk minimization with a fixed training distribution. These results highlight the potential of distribution-aware training as a principled and practical framework for robust OOD generalization.
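The paper's exact formulation is not reproduced on this page, but strategy (i) can be sketched as a bilevel problem over probability measures. The notation below is a hedged reconstruction, not the authors' statement: rho is the training distribution, V the predefined family of target distributions (averaged uniformly here by assumption), f-dagger the ground-truth map, f_theta the trained model, and ell a pointwise loss such as squared error.

  \min_{\rho} \; \mathbb{E}_{\nu \sim \mathcal{V}} \, \mathbb{E}_{x \sim \nu} \big[ \ell\big( f_{\theta^*(\rho)}(x), \, f^\dagger(x) \big) \big]
  \quad \text{subject to} \quad
  \theta^*(\rho) = \operatorname*{arg\,min}_{\theta} \; \mathbb{E}_{x \sim \rho} \big[ \ell\big( f_\theta(x), \, f^\dagger(x) \big) \big]

Under this reading, strategy (ii) would replace the outer objective with the paper's theoretical upper bound on OOD error, trading the nested optimization for a more tractable surrogate; the bound itself is stated in the paper, not here.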

View on arXiv: https://arxiv.org/abs/2505.21626
@article{guerra2025_2505.21626,
  title={Learning Where to Learn: Training Distribution Selection for Provable OOD Performance},
  author={Nicolas Guerra and Nicholas H. Nelsen and Yunan Yang},
  journal={arXiv preprint arXiv:2505.21626},
  year={2025}
}