Multimodal Foundation Model for Cross-Modal Retrieval and Activity Recognition Tasks

Main: 22 pages
8 figures
Bibliography: 3 pages
4 tables
Abstract

In recent years, the widespread adoption of wearable devices has highlighted the growing importance of behavior analysis using inertial measurement unit (IMU) data. While applications span diverse fields such as healthcare and robotics, recent studies have increasingly moved beyond unimodal analysis toward multimodal approaches. Several studies have proposed multimodal foundation models that incorporate first-person video and text data; however, these models still fall short of providing a detailed analysis of full-body human activity. To address this limitation, we propose the Activity Understanding and Representations Alignment Multimodal Foundation Model (AURA-MFM), a foundation model integrating four modalities: third-person video, motion capture, IMU, and text. By incorporating third-person video and motion capture data, the model enables a detailed, multidimensional understanding of human activity that first-person perspectives alone fail to capture. Additionally, a Transformer-based IMU encoder is employed to enhance the model's overall performance. Experimental evaluations on retrieval and activity recognition tasks demonstrate that our model surpasses existing methods. Notably, in zero-shot action recognition, our method achieved significantly higher performance, with an F1-score of 0.6226 and an accuracy of 0.7320, whereas the existing method recorded an F1-score of 0.0747 and an accuracy of 0.1961.

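The abstract mentions a Transformer-based IMU encoder and zero-shot activity recognition in a shared embedding space. The sketch below illustrates, in PyTorch, roughly what such a component could look like; the class names, dimensions, pooling strategy, and the cosine-similarity zero-shot step are illustrative assumptions, not the authors' implementation (AURA-MFM additionally aligns third-person video, motion-capture, and text encoders, which are omitted here).

# Minimal sketch (not the paper's code): a Transformer-based IMU encoder and
# zero-shot classification by cosine similarity to text embeddings.
# All module names, dimensions, and the projection size are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F


class IMUTransformerEncoder(nn.Module):
    """Encodes a window of IMU readings (e.g. accelerometer + gyroscope) into one embedding."""

    def __init__(self, in_channels: int = 6, d_model: int = 256, n_heads: int = 4,
                 n_layers: int = 4, embed_dim: int = 512, max_len: int = 512):
        super().__init__()
        self.input_proj = nn.Linear(in_channels, d_model)          # per-timestep projection
        self.pos_embed = nn.Parameter(torch.zeros(1, max_len, d_model))
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, embed_dim)                  # project into the shared space

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels), e.g. a few seconds of 6-axis IMU data
        h = self.input_proj(x) + self.pos_embed[:, : x.size(1)]
        h = self.encoder(h)
        h = h.mean(dim=1)                                          # temporal average pooling
        return F.normalize(self.head(h), dim=-1)                   # unit-norm embedding


@torch.no_grad()
def zero_shot_classify(imu_emb: torch.Tensor, text_embs: torch.Tensor) -> torch.Tensor:
    """Assign each IMU embedding the label of its most similar text embedding."""
    # imu_emb: (batch, embed_dim); text_embs: (num_classes, embed_dim), both unit-normalized
    similarity = imu_emb @ text_embs.T                             # cosine similarity
    return similarity.argmax(dim=-1)                               # predicted class indices


if __name__ == "__main__":
    encoder = IMUTransformerEncoder()
    imu_window = torch.randn(2, 200, 6)                            # 2 samples, 200 timesteps, 6 axes
    text_embs = F.normalize(torch.randn(5, 512), dim=-1)           # stand-in for 5 class-prompt embeddings
    print(zero_shot_classify(encoder(imu_window), text_embs))

In practice, the text embeddings would come from a pretrained text encoder applied to class-name prompts, so the classifier requires no labeled IMU training data; that is what makes the recognition step zero-shot.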
@article{matsuishi2025_2506.03174,
  title={Multimodal Foundation Model for Cross-Modal Retrieval and Activity Recognition Tasks},
  author={Koki Matsuishi and Kosuke Ukita and Tsuyoshi Okita},
  journal={arXiv preprint arXiv:2506.03174},
  year={2025}
}