Learning Generalized and Flexible Trajectory Models from Omni-Semantic Supervision

Abstract

The widespread adoption of mobile devices and data collection technologies has led to an exponential increase in trajectory data, presenting significant challenges in spatio-temporal data mining, particularly for efficient and accurate trajectory retrieval. However, existing methods for trajectory retrieval face notable limitations, including inefficiency on large-scale data, lack of support for condition-based queries, and reliance on trajectory similarity measures. To address these challenges, we propose OmniTraj, a generalized and flexible omni-semantic trajectory retrieval framework that integrates four complementary modalities, or semantics -- raw trajectories, topology, road segments, and regions -- into a unified system. Unlike traditional approaches, which compute over and process trajectories as a single modality, OmniTraj designs a dedicated encoder for each modality, whose outputs are embedded and fused into a shared representation space. This design enables OmniTraj to support accurate and flexible queries based on any individual modality or combination thereof, overcoming the rigidity of traditional similarity-based methods. Extensive experiments on two real-world datasets demonstrate the effectiveness of OmniTraj in handling large-scale data, providing flexible, multi-modality queries, and supporting downstream tasks and applications.
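
To make the architectural idea concrete, the sketch below illustrates per-modality encoders projecting into a shared embedding space, with retrieval from any subset of the four modalities. This is not the authors' implementation: the encoder architectures, dimensions, mean-based fusion, and cosine-similarity retrieval are all illustrative assumptions.

```python
# Minimal sketch of omni-semantic retrieval (assumptions, not the paper's code):
# four per-modality encoders share one embedding space; a query may supply any
# subset of modalities, which are fused and matched against indexed trajectories.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityEncoder(nn.Module):
    """Placeholder encoder: maps one modality's features to the shared space."""
    def __init__(self, in_dim: int, shared_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, shared_dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Unit-norm embeddings so dot products act as cosine similarity.
        return F.normalize(self.net(x), dim=-1)

class OmniSemanticRetriever(nn.Module):
    """Fuses embeddings from whichever of the four modalities are provided."""
    def __init__(self, dims: dict, shared_dim: int = 128):
        super().__init__()
        self.encoders = nn.ModuleDict(
            {name: ModalityEncoder(d, shared_dim) for name, d in dims.items()}
        )

    def embed(self, inputs: dict) -> torch.Tensor:
        # Encode each supplied modality, fuse by averaging (an assumption; the
        # paper's fusion mechanism may differ), then re-normalize.
        embs = [self.encoders[name](x) for name, x in inputs.items()]
        return F.normalize(torch.stack(embs, dim=0).mean(dim=0), dim=-1)

# Usage: index trajectories with all four modalities, query with any subset.
dims = {"trajectory": 64, "topology": 32, "road_segments": 48, "regions": 16}
model = OmniSemanticRetriever(dims)

database = model.embed({k: torch.randn(1000, d) for k, d in dims.items()})
query = model.embed({"regions": torch.randn(1, 16),          # partial query:
                     "road_segments": torch.randn(1, 48)})   # two modalities only
scores = query @ database.T                                   # cosine similarity
print(scores.topk(5).indices)                                  # top-5 retrieved trajectories
```

In this sketch, averaging the available modality embeddings is what lets a single index serve queries built from any individual modality or combination thereof; a learned fusion (e.g., attention over modalities) would slot into the same interface.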

@article{zhu2025_2505.17437,
  title={Learning Generalized and Flexible Trajectory Models from Omni-Semantic Supervision},
  author={Yuanshao Zhu and James Jianqiao Yu and Xiangyu Zhao and Xiao Han and Qidong Liu and Xuetao Wei and Yuxuan Liang},
  journal={arXiv preprint arXiv:2505.17437},
  year={2025}
}