Continuous Domain Generalization

17 May 2025
Zekun Cai, Yiheng Yao, Guangji Bai, Renhe Jiang, Xuan Song, Ryosuke Shibasaki, Liang Zhao
Topic: OOD
Main: 9 pages · 10 figures · 3 tables · Bibliography: 4 pages · Appendix: 9 pages
Abstract

Real-world data distributions often shift continuously across multiple latent factors such as time, geography, and socioeconomic context. However, existing domain generalization approaches typically treat domains as discrete or evolving along a single axis (e.g., time), which fails to capture the complex, multi-dimensional nature of real-world variation. This paper introduces the task of Continuous Domain Generalization (CDG), which aims to generalize predictive models to unseen domains defined by arbitrary combinations of continuous variation descriptors. We present a principled framework grounded in geometric and algebraic theory, showing that optimal model parameters across domains lie on a low-dimensional manifold. To model this structure, we propose a Neural Lie Transport Operator (NeuralLTO), which enables structured parameter transitions by enforcing geometric continuity and algebraic consistency. To handle noisy or incomplete domain descriptors, we introduce a gating mechanism to suppress irrelevant dimensions and a local chart-based strategy for robust generalization. Extensive experiments on synthetic and real-world datasets, including remote sensing, scientific documents, and traffic forecasting, demonstrate that our method significantly outperforms existing baselines in generalization accuracy and robustness under descriptor imperfections.
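
The abstract describes the mechanism only at a high level. As a rough illustration of the core idea, the sketch below shows one way a learned operator could transport task-model parameters along a change in continuous domain descriptors, with a gate that down-weights irrelevant descriptor dimensions. This is a minimal sketch assuming a PyTorch setup; the class name NeuralLTOSketch, the network shape, and the first-order additive update are illustrative assumptions, not the paper's actual NeuralLTO architecture, Lie-theoretic constraints, or training objective.

# Hypothetical sketch of parameter transport along domain descriptors.
# Names and architecture are assumptions for illustration only.
import torch
import torch.nn as nn

class NeuralLTOSketch(nn.Module):
    """Maps a change in continuous domain descriptors to an update of
    (flattened) task-model parameters."""
    def __init__(self, descriptor_dim: int, param_dim: int, hidden: int = 64):
        super().__init__()
        # Learnable gate to suppress irrelevant or noisy descriptor
        # dimensions (the abstract's gating idea).
        self.gate = nn.Parameter(torch.ones(descriptor_dim))
        self.net = nn.Sequential(
            nn.Linear(descriptor_dim, hidden),
            nn.Tanh(),
            nn.Linear(hidden, param_dim),
        )

    def forward(self, theta: torch.Tensor, delta_desc: torch.Tensor) -> torch.Tensor:
        # First-order transport step: theta' = theta + f(gated descriptor change).
        gated = torch.sigmoid(self.gate) * delta_desc
        return theta + self.net(gated)

# Usage: transport source-domain parameters toward a target descriptor.
op = NeuralLTOSketch(descriptor_dim=3, param_dim=128)
theta_src = torch.randn(128)
delta = torch.tensor([0.1, -0.2, 0.05])  # target minus source descriptors
theta_tgt = op(theta_src, delta)

In this toy form the operator is just a gated residual network over descriptor differences; the paper's contribution, per the abstract, is additionally enforcing geometric continuity and algebraic consistency on such transitions, which this sketch does not attempt.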

@article{cai2025_2505.13519,
  title={Continuous Domain Generalization},
  author={Zekun Cai and Yiheng Yao and Guangji Bai and Renhe Jiang and Xuan Song and Ryosuke Shibasaki and Liang Zhao},
  journal={arXiv preprint arXiv:2505.13519},
  year={2025}
}