
MoCA: Multi-modal Cross-masked Autoencoder for Digital Health Measurements

Main: 8 pages
Appendix: 5 pages
Bibliography: 4 pages
Figures: 9
Tables: 12
Abstract

The growing prevalence of digital health technologies has led to the generation of complex multi-modal data, such as physical activity measurements collected simultaneously from multiple sensors on mobile and wearable devices. These data hold immense potential for advancing health studies, but current methods predominantly rely on supervised learning, requiring extensive labeled datasets that are often expensive or impractical to obtain, especially in clinical studies. To address this limitation, we propose a self-supervised learning framework called Multi-modal Cross-masked Autoencoder (MoCA) that combines cross-modality masking with a Transformer autoencoder architecture to exploit both temporal correlations within modalities and cross-modal correlations between data streams. We also provide theoretical guarantees supporting the effectiveness of the cross-modality masking scheme in MoCA. Comprehensive experiments and ablation studies demonstrate that our method outperforms existing approaches in both reconstruction and downstream tasks. We release open-source code for data processing, pre-training, and downstream tasks in the supplementary materials. This work highlights the transformative potential of self-supervised learning in digital health and multi-modal data.
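To make the cross-modality masking idea concrete, below is a minimal sketch (not the authors' implementation; all function names, shapes, and the mask ratio are assumptions for illustration). Each modality's patch sequence is masked independently, so a time window hidden in one data stream may remain visible in another, which is what lets the autoencoder exploit cross-modal correlations during reconstruction.

import numpy as np

rng = np.random.default_rng(0)

def cross_modal_masks(n_modalities: int, n_patches: int, mask_ratio: float):
    """Draw an independent boolean mask (True = masked) for each modality."""
    n_masked = int(round(mask_ratio * n_patches))
    masks = np.zeros((n_modalities, n_patches), dtype=bool)
    for m in range(n_modalities):
        # Independent sampling per modality is the key difference from
        # masking the same time windows across all streams at once.
        idx = rng.choice(n_patches, size=n_masked, replace=False)
        masks[m, idx] = True
    return masks

# Hypothetical example: 2 sensor streams (e.g., wrist and hip
# accelerometers), 10 temporal patches each, 50% masking.
masks = cross_modal_masks(n_modalities=2, n_patches=10, mask_ratio=0.5)
# Positions masked in one modality but visible in the other are where
# cross-modal correlation can drive reconstruction.
complementary = masks[0] ^ masks[1]
print(masks.astype(int))
print("complementary coverage:", complementary.mean())

Under uniform-at-random masking with this ratio, roughly half of the masked patches in one stream are expected to be visible in the other, whereas masking the same windows in every modality would leave no cross-modal signal to recover them from.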

@article{ryu2025_2506.02260,
  title={MoCA: Multi-modal Cross-masked Autoencoder for Digital Health Measurements},
  author={Howon Ryu and Yuliang Chen and Yacun Wang and Andrea Z. LaCroix and Chongzhi Di and Loki Natarajan and Yu Wang and Jingjing Zou},
  journal={arXiv preprint arXiv:2506.02260},
  year={2025}
}